Complete Guide to Deploying Kafka with Docker

This guide explains in detail how to deploy a Kafka message queue with Docker, covering both single-node and cluster deployments.

1. Single-Node Deployment (Zookeeper + Kafka)

1.1 Create the docker-compose.yml file
```yaml
version: '3.8'

services:
  zookeeper:
    image: bitnami/zookeeper:3.8
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    volumes:
      - zookeeper_data:/bitnami/zookeeper

  kafka:
    image: bitnami/kafka:3.4
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${HOST_IP}:9092
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - kafka_data:/bitnami/kafka
    depends_on:
      - zookeeper

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
```
1.2 Start the services

```bash
export HOST_IP=$(hostname -I | awk '{print $1}')
docker-compose up -d
```
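To confirm the containers came up correctly, check their status and wait for the broker's "started" log line; the container name `kafka` matches the compose file above.

```bash
# List the containers defined in this compose project
docker-compose ps

# The broker is ready once a "started (kafka.server.KafkaServer)" line appears
docker logs kafka | grep -i started
```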
2. KRaft Mode Deployment (no Zookeeper)

2.1 Create the docker-compose.yml file
```yaml
version: '3.8'

services:
  kafka:
    image: bitnami/kafka:3.4
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_NODE_ID=1
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka:9093
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      # controller.listener.names is required when running in KRaft mode
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${HOST_IP}:9092
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
      - ALLOW_PLAINTEXT_LISTENER=yes
    volumes:
      - kafka_data:/bitnami/kafka

volumes:
  kafka_data:
    driver: local
```
2.2 Start the service

```bash
export HOST_IP=$(hostname -I | awk '{print $1}')
docker-compose up -d
```
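To verify that the KRaft quorum is healthy, you can query the controller metadata. A minimal check, assuming Kafka 3.3+ (where `kafka-metadata-quorum.sh` is available) and the container name `kafka` from the compose file above:

```bash
# Show the state of the KRaft controller quorum (leader, voters, lag)
docker exec -it kafka kafka-metadata-quorum.sh \
  --bootstrap-server localhost:9092 describe --status
```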
3. Cluster Deployment (3 Nodes)

3.1 Create the docker-compose.yml file
```yaml
version: '3.8'

services:
  zookeeper:
    image: bitnami/zookeeper:3.8
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888
    volumes:
      - zookeeper_data:/bitnami/zookeeper

  kafka1:
    image: bitnami/kafka:3.4
    container_name: kafka1
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_BROKER_ID=1
      # each broker listens on the same port it publishes to the host
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${HOST_IP}:9092
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - kafka1_data:/bitnami/kafka
    depends_on:
      - zookeeper

  kafka2:
    image: bitnami/kafka:3.4
    container_name: kafka2
    ports:
      - "9093:9093"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_BROKER_ID=2
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${HOST_IP}:9093
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - kafka2_data:/bitnami/kafka
    depends_on:
      - zookeeper

  kafka3:
    image: bitnami/kafka:3.4
    container_name: kafka3
    ports:
      - "9094:9094"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_BROKER_ID=3
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://${HOST_IP}:9094
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
    volumes:
      - kafka3_data:/bitnami/kafka
    depends_on:
      - zookeeper

volumes:
  zookeeper_data:
    driver: local
  kafka1_data:
    driver: local
  kafka2_data:
    driver: local
  kafka3_data:
    driver: local
```
3.2 Start the cluster

```bash
export HOST_IP=$(hostname -I | awk '{print $1}')
docker-compose up -d
```
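A quick way to confirm that all three brokers registered with Zookeeper is to list the broker IDs. This is a sketch assuming `zkCli.sh` is on the PATH inside the bitnami/zookeeper container:

```bash
# Expected output ends with: [1, 2, 3]
docker exec -it zookeeper zkCli.sh -server localhost:2181 ls /brokers/ids
```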
4. Verifying Basic Operations

4.1 Create a topic

```bash
docker exec -it kafka kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 3 \
  --topic test-topic
```
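After creation, you can check the partition layout and replica assignment with `--describe`:

```bash
docker exec -it kafka kafka-topics.sh --describe \
  --bootstrap-server localhost:9092 \
  --topic test-topic
```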
4.2 Produce messages

```bash
docker exec -it kafka kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic test-topic
```
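The console producer sends one message per line read from stdin. If you want keyed messages (so records with the same key land on the same partition), the standard `parse.key` properties can be added; the `:` separator below is just an example:

```bash
docker exec -it kafka kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic test-topic \
  --property parse.key=true \
  --property key.separator=:
# Then type lines such as:  user-1:hello
```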
4.3 Consume messages

```bash
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic test-topic \
  --from-beginning
```
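To behave more like an application consumer, you can also join a consumer group so that offsets are committed; `my-group` below is just an example name:

```bash
docker exec -it kafka kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic test-topic \
  --group my-group
```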
4.4 List topics

```bash
docker exec -it kafka kafka-topics.sh --list \
  --bootstrap-server localhost:9092
```
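If you consumed with a group as above, its committed offsets and lag can be inspected with `kafka-consumer-groups.sh`:

```bash
docker exec -it kafka kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe --group my-group
```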
5. Management Tips

5.1 Data persistence

All data volumes are declared in the `volumes` section, so data survives container restarts.
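Note that Docker Compose prefixes named volumes with the project name (usually the directory containing the compose file), so the actual volume is typically called something like `<project>_kafka_data`. A quick way to locate and inspect it:

```bash
# Find the Kafka data volume and show where it lives on the host
docker volume ls | grep kafka_data
docker volume inspect $(docker volume ls -q | grep kafka_data)
```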
5.2 Monitoring

```bash
docker stats kafka zookeeper
```
5.3 Viewing logs

```bash
docker logs -f kafka
```
5.4 Stopping and cleanup

```bash
docker-compose down

# To remove the data volumes as well
docker-compose down -v
```
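If you want to keep the messages before running `down -v`, one option is to archive the volume contents first. A sketch, assuming the volume is named `kafka_data` (adjust for your compose project prefix):

```bash
# Archive the Kafka data volume into the current directory
docker run --rm \
  -v kafka_data:/data:ro \
  -v "$(pwd)":/backup \
  busybox tar czf /backup/kafka_data_backup.tar.gz -C /data .
```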
6. Notes

- For production, use KRaft mode or a cluster with at least 3 nodes.
- Adjust `KAFKA_CFG_ADVERTISED_LISTENERS` to match how clients actually reach the brokers.
- Volume paths can be changed as needed.
- Memory limits can be set with the `-m` flag (see the sketch after this list).
- For security, add an authentication mechanism.
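For a container that is already running, memory limits can also be applied with `docker update`; the `2g` value below is just an example:

```bash
# Cap the kafka container at 2 GiB of memory (equivalent to the -m flag on docker run)
docker update --memory 2g --memory-swap 2g kafka
```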
With the configuration above, you can quickly stand up a Kafka service suitable for development and testing. For production, adjust the configuration parameters and security settings to your actual requirements.