Summary

The first step in reading the Kafka source code is setting up a working source environment. This post walks through the following steps:
1. Environment preparation
2. Installing the Scala plugin in IDEA
3. Downloading the matching source version from GitHub
4. Modifying the configuration
5. Configuring Gradle (important)
6. Automatic Kafka compilation
7. Creating the Kafka run configuration
8. Starting Kafka
9. Analyzing the startup log
10. Verifying that the Kafka started from IDEA works
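Condensed into commands, the clone-and-build portion of these steps (detailed below) looks roughly like this:

```shell
# fetch the source and switch to the 0.10.1 branch
git clone https://github.com/apache/kafka.git
cd kafka
git fetch origin
git checkout -b 0.10.1 origin/0.10.1

# build, then generate the IDEA project files (kafka.ipr)
gradle jar
gradle idea
```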

Steps

1. Environment

idea2020
jdk1.8
gradle3.5
scala2.11.8
zookeeper (standalone or a cluster, either works)
kafka-0.10.1

2. IDEA setup

First, install the Scala plugin.
In the IDEA settings there is a "Plugins" entry on the left. Searching for Scala there may turn up nothing at first; click "Search in repositories" and look for the plugin named "Scala" (category "Language"). Install it online; it will download and install itself. After installation it looks like this:
image.png

3. Clone the source

git clone https://github.com/apache/kafka.git

Fetch the remote branches:

git fetch origin

Switch to the target branch:

git checkout -b 0.10.1 origin/0.10.1

At this point, be careful not to open the project directly in IDEA yet!

3.1 Generate the project files

Kafka ships with Gradle tasks that can generate project configuration for Eclipse or IDEA.
In the Kafka directory, run:

gradle jar
gradle idea

A file named kafka.ipr now appears in the directory.
Double-click it in Finder and IDEA will open and import the project. When IDEA opens, a popup usually appears in the bottom-right corner saying, in effect: "We detected this is a Gradle project; import the Gradle configuration?"
Just click confirm.

If nothing happens after IDEA opens, open the build.gradle file manually.
Note: this is the first point at which IDEA should be opened.
image.png

4. Modify the configuration

4.1 The build.gradle file

Add the following to the file:

ScalaCompileOptions.metaClass.daemonServer = true
ScalaCompileOptions.metaClass.fork = true
ScalaCompileOptions.metaClass.useAnt = false
ScalaCompileOptions.metaClass.useCompileDaemon = false

4.2 Create directories

Create a log directory and a data directory.
Create a resources directory and copy the log4j.properties file from config into it.
image.png

4.3 Modify files

In config/log4j.properties, change every ${kafka.logs.dir} reference to point at the newly created log directory (the placeholder lives in log4j.properties; the scripts that normally set it are not used when launching from IDEA).
In config/server.properties, change log.dirs to the newly created data directory.
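A sketch of the two changes; /path/to/kafka is a placeholder for your local checkout, and the appender name is the one used in Kafka's shipped log4j.properties:

```properties
# config/log4j.properties – replace the ${kafka.logs.dir} placeholder
log4j.appender.kafkaAppender.File=/path/to/kafka/log/server.log

# config/server.properties
log.dirs=/path/to/kafka/data
```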

5. Configure Gradle

Note: getting this Gradle configuration right is especially important. Many compile failures come from the local Gradle setup, in particular from running a Gradle version far newer than this build supports. For example, evaluating the 0.10.1 build with Gradle 6.5.1 produces:

* Where:
Build file 'D:\idePro\kafka0.1.0.1\kafka-0.10.1\kafka-0.10.1\build.gradle' line: 305

* What went wrong:
A problem occurred evaluating root project 'kafka-0.10.1'.
> Cannot set the value of read-only property 'additionalSourceDirs' for task ':jacocoRootReport' of type org.gradle.testing.jacoco.tasks.JacocoReport.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.5.1/userguide/command_line_interface.html#sec:command_line_warnings

image.png
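Before importing, it is worth confirming which Gradle version is on your PATH; the environment list above uses Gradle 3.5, while the error shown here came from a 6.5.1 install:

```shell
# print the active Gradle version; this post's build used 3.5
gradle -v
```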

6. Automatic Kafka compilation

Once the steps above are done, Kafka compiles automatically using the local Gradle.
The import also takes quite some time, so be patient; while it runs it looks like this:
image.png
When the import finally succeeds it looks like this:
image.png

7. Kafka run configuration

To start Kafka inside IntelliJ IDEA, we launch it from the source itself.
Kafka's entry class is "kafka.Kafka", and it reads the "server.properties" file, so it must be told where that file lives. In the menu bar at the top right of IDEA there is a "Run" menu; click it, then click "Edit Configurations".
image.png
In the dialog that appears, click "+", choose "Application", set "Name" to "Kafka", "Main Class" to "kafka.Kafka", "Program arguments" to "config/server.properties", and "Use classpath of module" to the core module.
image.png

8. Start Kafka

8.1 Start a ZooKeeper first

For starting ZooKeeper, see: VM ZooKeeper cluster setup: https://segmentfault.com/a/11...

8.2 Run the application

The first start involves a long wait.
image.png

After startup:
Kafka's data files are created:
image.png
Kafka's log files are created:
image.png

9. Startup log analysis

Kafka's startup log makes it much clearer how Kafka actually works.

Building project 'core' with Scala version 2.10.6
:clients:compileJava UP-TO-DATE
:clients:processResources NO-SOURCE
:clients:classes UP-TO-DATE
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava NO-SOURCE
:core:compileScala UP-TO-DATE
:core:processResources UP-TO-DATE
:core:classes UP-TO-DATE
:core:Kafka.main()
------------First, Kafka's basic configuration is printed----------------
[2021-06-05 09:11:06,096] INFO KafkaConfig values:
    background.threads = 10
    broker.id = 0
    broker.id.generation.enable = true
    compression.type = producer
    connections.max.idle.ms = 600000
    log.dirs = /Users/xiexinming/code/opnesource/kafka/data
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.fetchers = 1
    port = 9092
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    unclean.leader.election.enable = true
    zookeeper.connect = localhost:2181
    zookeeper.connection.timeout.ms = 6000
    zookeeper.session.timeout.ms = 6000
    zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
 -----------------Kafka starts up-------------------------
[2021-06-05 09:11:06,128] INFO starting (kafka.server.KafkaServer)
[2021-06-05 09:11:06,138] INFO [ThrottledRequestReaper-Fetch], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
[2021-06-05 09:11:06,139] INFO [ThrottledRequestReaper-Produce], Starting  (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
------------------Connect to ZooKeeper-------------------------------
[2021-06-05 09:11:06,141] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2021-06-05 09:11:06,148] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
-------------------Print the usual environment variables--------------------
[2021-06-05 09:11:06,168] INFO Client environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:host.name=192.168.1.8 (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.version=1.8.0_212 (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_212.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.class.path=/Users/xiexinming/code/opnesource/kafka/core/build/classes/main:/Users/xiexinming/code/opnesource/kafka/core/build/resources/main:/Users/xiexinming/code/opnesource/kafka/clients/build/libs/kafka-clients-0.10.1.2-SNAPSHOT.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/net.sf.jopt-simple/jopt-simple/4.9/ee9e9eaa0a35360dcfeac129ff4923215fd65904/jopt-simple-4.9.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/com.yammer.metrics/metrics-core/2.2.0/f82c035cfa786d3cbec362c38c22a5f5b1bc8724/metrics-core-2.2.0.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/org.scala-lang/scala-library/2.10.6/421989aa8f95a05a4f894630aad96b8c7b828732/scala-library-2.10.6.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/org.slf4j/slf4j-log4j12/1.7.21/7238b064d1aba20da2ac03217d700d91e02460fa/slf4j-log4j12-1.7.21.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/com.101tec/zkclient/0.9/822482d08b9a0af9c43e8d1b6cb94e81ee3e361c/zkclient-0.9.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/org.apache.zookeeper/zookeeper/3.4.8/933ea2ed15e6a0e24b788973e3d128ff163c3136/zookeeper-3.4.8.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/org.xerial.snappy/snappy-java/1.1.2.6/48d92871ca286a47f230feb375f0bbffa83b85f6/snappy-java-1.1.2.6.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.21/139535a69a4239db087de9bab0bee568bf8e0b70/slf4j-api-1.7.21.jar:/Users/xiexinming/software/gradle-3.5/caches/modules-2/files-2.1/log4j/log4j/1.2.17/5af35056b4d257e4b64b9e8069c0746e8b08629f/log4j-1.2.17.jar (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.library.path=/Users/xiexinming/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.io.tmpdir=/var/folders/ys/5ccl5rr102v97z6t4mkpl4sc0000gn/T/ (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:os.version=10.15.4 (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:user.name=xiexinming (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:user.home=/Users/xiexinming (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,168] INFO Client environment:user.dir=/Users/xiexinming/code/opnesource/kafka (org.apache.zookeeper.ZooKeeper)
-------------------Initiate the client connection--------------------
[2021-06-05 09:11:06,169] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4f9a3314 (org.apache.zookeeper.ZooKeeper)
[2021-06-05 09:11:06,181] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
[2021-06-05 09:11:06,183] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-06-05 09:11:06,230] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2021-06-05 09:11:06,243] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x179d9b6cd970000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2021-06-05 09:11:06,244] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2021-06-05 09:11:06,323] INFO Cluster ID = dw2lwJNoQCGe1ZhcUYOhww (kafka.server.KafkaServer)
[2021-06-05 09:11:06,364] INFO Loading logs. (kafka.log.LogManager)
[2021-06-05 09:11:06,367] INFO Logs loading complete in 3 ms. (kafka.log.LogManager)
[2021-06-05 09:11:06,433] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2021-06-05 09:11:06,434] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2021-06-05 09:11:06,437] WARN No meta.properties file under dir /Users/xiexinming/code/opnesource/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
-------------------Kafka's Acceptor thread listens on port 9092 for incoming connections--------------------
[2021-06-05 09:11:06,464] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-06-05 09:11:06,472] INFO [Socket Server on Broker 0], Started 1 acceptor threads (kafka.network.SocketServer)
[2021-06-05 09:11:06,482] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-06-05 09:11:06,483] INFO [ExpirationReaper-0], Starting  (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
-------------------Create the Kafka controller node; every Kafka cluster must have one controller--------------------
[2021-06-05 09:11:06,504] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2021-06-05 09:11:06,511] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
-------------------The controller becomes the leader--------------------
[2021-06-05 09:11:06,511] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2021-06-05 09:11:06,813] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
-------------------Kafka creates its own node with id=0--------------------
[2021-06-05 09:11:06,852] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
[2021-06-05 09:11:06,854] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
-------------------Broker id=0 registers itself in ZooKeeper so that others can discover it--------------------
[2021-06-05 09:11:06,855] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(192.168.1.8,9092,PLAINTEXT) (kafka.utils.ZkUtils)
[2021-06-05 09:11:06,855] WARN No meta.properties file under dir /Users/xiexinming/code/opnesource/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2021-06-05 09:11:06,863] INFO Kafka version : 0.10.1.2-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-05 09:11:06,864] INFO Kafka commitId : 9bdd8d6b60f816c9 (org.apache.kafka.common.utils.AppInfoParser)
[2021-06-05 09:11:06,864] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
-------------------Replicas fetch messages--------------------
[2021-06-05 09:11:06,985] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,send_message_response-0,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,push_message-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
[2021-06-05 09:11:07,003] INFO Completed load of log __consumer_offsets-0 with 1 log segments and log end offset 0 in 13 ms (kafka.log.Log)
[2021-06-05 09:11:07,004] INFO Created log for partition [__consumer_offsets,0] in /Users/xiexinming/code/opnesource/kafka/data with properties {compression.type -> producer, message.format.version -> 0.10.1-IV2, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> true, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 104857600, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)

10. Verify that the Kafka started from IDEA works

Write Kafka producer and consumer client code, connect to the Kafka running in the local IDEA, and send messages.
Start the consumer first: ConsumerDemo
image.png

Then start the producer: ProducerDemo
image.png
Kafka's log output in IDEA:

[2021-06-05 10:11:06,874] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 3 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2021-06-05 10:11:18,011] INFO [GroupCoordinator 0]: Preparing to restabilize group groupId with old generation 1 (kafka.coordinator.GroupCoordinator)
[2021-06-05 10:11:18,012] INFO [GroupCoordinator 0]: Group groupId with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
[2021-06-05 10:11:35,180] INFO [GroupCoordinator 0]: Preparing to restabilize group groupId with old generation 2 (kafka.coordinator.GroupCoordinator)
[2021-06-05 10:11:35,180] INFO [GroupCoordinator 0]: Stabilized group groupId generation 3 (kafka.coordinator.GroupCoordinator)
[2021-06-05 10:11:35,183] INFO [GroupCoordinator 0]: Assignment received from leader for group groupId for generation 3 (kafka.coordinator.GroupCoordinator)

This shows that our source build runs correctly.
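As an alternative check that needs no client code, the console tools shipped in the checkout's bin/ directory can exercise the broker. This assumes the broker from IDEA is listening on localhost:9092 and ZooKeeper on localhost:2181; the topic name is just an example:

```shell
# create a test topic (the 0.10.x topic tool talks to ZooKeeper)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic demo-topic

# type messages into the producer...
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic demo-topic

# ...and read them back in another terminal
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic demo-topic --from-beginning
```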

Appendix:

Kafka producer/consumer source code: https://github.com/startshine...


startshineye