
Setting Up Kafka and Zookeeper with Docker Compose, with Redpanda Console for Cluster Monitoring


Introduction

In modern data architectures, Kafka is a popular distributed message broker, widely used for log collection, real-time stream processing, and similar scenarios. Standing up a Kafka cluster is not trivial, though, especially for beginners: installing Zookeeper, configuring Kafka, and then managing and monitoring the cluster all bring their own challenges. Fortunately, Docker Compose gives us a quick and simple way to deploy, and Redpanda Console is an excellent visualization tool for managing a Kafka cluster. This article shows how to set up Kafka and Zookeeper with Docker Compose and monitor the cluster with Redpanda Console, so you can work with Kafka more efficiently.

If you are not familiar with Docker or docker-compose, see the following first:

docker-compose file explained, plus common commands

Single-node setup

Creating the mount directories

(screenshot: mount directory layout with the properties files)

For the properties files shown on the right side of the screenshot, first bring up a Kafka container and copy its configuration files out of it:

docker cp acowbo-data-center-kafka:/opt/bitnami/kafka/config/server.properties ./
docker cp acowbo-data-center-kafka:/opt/bitnami/kafka/config/zookeeper.properties ./
docker cp acowbo-data-center-kafka:/opt/bitnami/kafka/config/log4j.properties ./
docker cp acowbo-data-center-kafka:/opt/bitnami/kafka/config/consumer.properties ./
docker cp acowbo-data-center-kafka:/opt/bitnami/kafka/config/producer.properties ./

Note that mounting the whole config directory directly does not sync the container's built-in configuration files back to the host. Kafka images (such as confluentinc/cp-kafka) can behave differently from one another, and Kafka will not necessarily write its initial configuration or data into a mounted directory: it expects files at specific paths rather than populating the host directory for you.
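If no Kafka container is running yet, one way to obtain the default files is to create a throwaway container purely for copying. A minimal sketch; the name kafka-tmp is arbitrary:

docker create --name kafka-tmp bitnami/kafka:3.6.0
# docker cp works even on a container that was created but never started
docker cp kafka-tmp:/opt/bitnami/kafka/config/server.properties ./kafka/conf/
docker cp kafka-tmp:/opt/bitnami/kafka/config/zookeeper.properties ./kafka/conf/
docker cp kafka-tmp:/opt/bitnami/kafka/config/log4j.properties ./kafka/conf/
docker cp kafka-tmp:/opt/bitnami/kafka/config/consumer.properties ./kafka/conf/
docker cp kafka-tmp:/opt/bitnami/kafka/config/producer.properties ./kafka/conf/
docker rm kafka-tmp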

Creating docker-compose.yml

version: '3.8'
services:
  # Zookeeper service
  zookeeper:
    # Bitnami Zookeeper image
    image: bitnami/zookeeper:3.9
    container_name: acowbo-data-center-zookeeper
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes  # allow anonymous login
      - TZ=Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime:ro  # map the host's timezone setting
      # Data mounts: Zookeeper data and config files
      # - ./zookeeper/conf:/opt/bitnami/zookeeper/conf  # mount config files
      # - ./zookeeper/data:/bitnami/zookeeper  # persist data
    ports:
      - "17111:2181"  # Zookeeper listening port
    networks:
      - acowbo-data-center  # use the custom network
    restart: unless-stopped  # restart automatically unless stopped manually

  # Kafka service
  kafka:
    # Bitnami Kafka image
    image: bitnami/kafka:3.6.0
    container_name: acowbo-data-center-kafka
    environment:
      - TZ=Asia/Shanghai
      - KAFKA_OPTS=-Duser.timezone=Asia/Shanghai
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092  # address advertised to clients
      - KAFKA_LISTENER_SECURITY_PROTOCOL=PLAINTEXT  # security protocol (default: PLAINTEXT)
      - KAFKA_LISTENER_NAME=PLAINTEXT
      - KAFKA_LISTENER_PORT=9092  # Kafka listening port
      - KAFKA_ZOOKEEPER_CONNECT=acowbo-data-center-zookeeper:2181  # Kafka connects to Zookeeper
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT  # inter-broker listener
    ports:
      - "17112:9092"  # port exposed to the host
    networks:
      - acowbo-data-center  # use the custom network
    depends_on:
      - zookeeper  # Kafka depends on Zookeeper
    volumes:
      # Mounts: Kafka configuration and persistent data
      - /etc/localtime:/etc/localtime:ro  # map the host's timezone setting
      - ./kafka/conf/server.properties:/opt/bitnami/kafka/config/server.properties  # mount config files
      - ./kafka/conf/log4j.properties:/opt/bitnami/kafka/config/log4j.properties
      - ./kafka/conf/consumer.properties:/opt/bitnami/kafka/config/consumer.properties
      - ./kafka/conf/producer.properties:/opt/bitnami/kafka/config/producer.properties
      - ./kafka/conf/zookeeper.properties:/opt/bitnami/kafka/config/zookeeper.properties
      - ./kafka/logs:/opt/bitnami/kafka/logs  # persist logs
      - ./kafka/data:/bitnami/kafka/data  # persist data
    # Container restart policy
    restart: unless-stopped  # restart automatically unless stopped manually

  # Redpanda Console service
  redpanda-console:
    # Redpanda Console image
    image: redpandadata/console:latest
    container_name: acowbo-data-center-redpanda-console
    environment:
      - TZ=Asia/Shanghai
      - KAFKA_BROKERS=acowbo-data-center-kafka:9092  # address of the Kafka cluster
    volumes:
      - /etc/localtime:/etc/localtime:ro  # map the host's timezone setting
    ports:
      - "17113:8080"  # Redpanda Console listening port
    networks:
      - acowbo-data-center  # use the custom network
    depends_on:
      - kafka  # Redpanda Console depends on Kafka
    restart: unless-stopped  # restart automatically unless stopped manually

# Custom network configuration
networks:
  acowbo-data-center:
    driver: bridge  # use the bridge network driver
    name: acowbo-data-center
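One caveat about KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092: it only works for clients inside the acowbo-data-center network, because the broker tells clients to reconnect to the hostname kafka, which the host machine normally cannot resolve. If you also need to reach the broker from the host through port 17112, a second advertised listener is required. This is a sketch only; the EXTERNAL listener name, the 19092 port, and the KAFKA_CFG_ variable prefix (used by the Bitnami image, as in the cluster compose file further below) are assumptions, not part of the original setup:

    environment:
      # INTERNAL serves in-network clients and inter-broker traffic; EXTERNAL is advertised to the host
      - KAFKA_CFG_LISTENERS=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:17112
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
    ports:
      - "17112:19092"   # host clients connect to localhost:17112, which maps to the EXTERNAL listener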

Startup

Run the following command from the project root to start the stack with Docker Compose:

docker-compose up -d

This command pulls the required images and starts the Zookeeper and Kafka services in the background.

Check that the services started successfully

Check the container status with the following commands:

docker ps
# or
docker-compose ps

Make sure both the zookeeper and kafka containers are running. If they are not, inspect the logs to find the problem:

docker logs <container ID>

Success screenshot

(screenshot: successful single-node startup)
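Beyond checking that the containers are up, a quick produce/consume round trip confirms the broker actually accepts traffic. This is a sketch run against the single-node setup above; the topic name smoke-test is arbitrary:

docker exec -it acowbo-data-center-kafka /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic smoke-test --partitions 1 --replication-factor 1
docker exec -i acowbo-data-center-kafka bash -c 'echo hello | /opt/bitnami/kafka/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic smoke-test'
docker exec -it acowbo-data-center-kafka /opt/bitnami/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic smoke-test --from-beginning --max-messages 1

If "hello" comes back from the consumer, the broker is working; the topic should also show up in Redpanda Console on port 17113.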

Cluster setup

Mount directories

(screenshot: cluster mount directory layout)

Creating docker-compose.yml

version: '3.8'
services:
  # Zookeeper node 1
  acowbo-data-center-zookeeper-one:
    image: bitnami/zookeeper:3.9
    container_name: acowbo-data-center-zookeeper-one
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes               # allow anonymous login
      - TZ=Asia/Shanghai                        # set the timezone
      - ZOO_MY_ID=1                             # unique ID of this Zookeeper node
      - ZOO_SERVERS=acowbo-data-center-zookeeper-one:2888:3888,acowbo-data-center-zookeeper-two:2888:3888,acowbo-data-center-zookeeper-three:2888:3888
    ports:
      - "17121:2181"                            # host 17121 -> container 2181
    networks:
      - acowbo-data-center                      # use the custom network
    volumes:
      - /etc/localtime:/etc/localtime:ro        # map the host's timezone setting
      - ./zookeeper-one/data:/bitnami/zookeeper/data  # persist the data directory
      - ./zookeeper-one/logs:/bitnami/zookeeper/log   # persist the log directory
      - ./zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg # mount the shared zoo.cfg
    restart: unless-stopped                     # restart policy

  # Zookeeper node 2
  acowbo-data-center-zookeeper-two:
    image: bitnami/zookeeper:3.9
    container_name: acowbo-data-center-zookeeper-two
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - TZ=Asia/Shanghai
      - ZOO_MY_ID=2
      - ZOO_SERVERS=acowbo-data-center-zookeeper-one:2888:3888,acowbo-data-center-zookeeper-two:2888:3888,acowbo-data-center-zookeeper-three:2888:3888
    ports:
      - "17122:2181"                            # host 17122 -> container 2181
    networks:
      - acowbo-data-center
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./zookeeper-two/data:/bitnami/zookeeper/data
      - ./zookeeper-two/logs:/bitnami/zookeeper/log
      - ./zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg
    restart: unless-stopped

  # Zookeeper node 3
  acowbo-data-center-zookeeper-three:
    image: bitnami/zookeeper:3.9
    container_name: acowbo-data-center-zookeeper-three
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - TZ=Asia/Shanghai
      - ZOO_MY_ID=3
      - ZOO_SERVERS=acowbo-data-center-zookeeper-one:2888:3888,acowbo-data-center-zookeeper-two:2888:3888,acowbo-data-center-zookeeper-three:2888:3888
    ports:
      - "17123:2181"                            # host 17123 -> container 2181
    networks:
      - acowbo-data-center
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./zookeeper-three/data:/bitnami/zookeeper/data
      - ./zookeeper-three/logs:/bitnami/zookeeper/log
      - ./zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg
    restart: unless-stopped

  # Kafka node 1
  acowbo-data-center-kafka-one:
    image: bitnami/kafka:3.6.0
    container_name: acowbo-data-center-kafka-one
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=acowbo-data-center-zookeeper-one:2181,acowbo-data-center-zookeeper-two:2181,acowbo-data-center-zookeeper-three:2181
      - KAFKA_BROKER_ID=1                       # unique broker ID of this Kafka node
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://acowbo-data-center-kafka-one:9092
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3 # replication factor (matches the number of Kafka nodes)
      - TZ=Asia/Shanghai                        # set the timezone
    ports:
      - "17124:9092"                            # port exposed to the host
    networks:
      - acowbo-data-center
    volumes:
      - /etc/localtime:/etc/localtime:ro        # map the host's timezone setting
      - ./kafka-one/conf/server.properties:/opt/bitnami/kafka/config/server.properties
      - ./kafka-one/conf/log4j.properties:/opt/bitnami/kafka/config/log4j.properties
      - ./kafka-one/conf/consumer.properties:/opt/bitnami/kafka/config/consumer.properties
      - ./kafka-one/conf/producer.properties:/opt/bitnami/kafka/config/producer.properties
      - ./kafka-one/conf/zookeeper.properties:/opt/bitnami/kafka/config/zookeeper.properties
      - ./kafka-one/data:/bitnami/kafka/data       # persist the data directory
      - ./kafka-one/logs:/opt/bitnami/kafka/logs   # persist the log directory
    depends_on:
      - acowbo-data-center-zookeeper-one
      - acowbo-data-center-zookeeper-two
      - acowbo-data-center-zookeeper-three
    restart: unless-stopped

  # Kafka node 2
  acowbo-data-center-kafka-two:
    image: bitnami/kafka:3.6.0
    container_name: acowbo-data-center-kafka-two
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=acowbo-data-center-zookeeper-one:2181,acowbo-data-center-zookeeper-two:2181,acowbo-data-center-zookeeper-three:2181
      - KAFKA_BROKER_ID=2
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://acowbo-data-center-kafka-two:9092
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - TZ=Asia/Shanghai
    ports:
      - "17125:9092"
    networks:
      - acowbo-data-center
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./kafka-two/conf/server.properties:/opt/bitnami/kafka/config/server.properties
      - ./kafka-two/conf/log4j.properties:/opt/bitnami/kafka/config/log4j.properties
      - ./kafka-two/conf/consumer.properties:/opt/bitnami/kafka/config/consumer.properties
      - ./kafka-two/conf/producer.properties:/opt/bitnami/kafka/config/producer.properties
      - ./kafka-two/conf/zookeeper.properties:/opt/bitnami/kafka/config/zookeeper.properties
      - ./kafka-two/data:/bitnami/kafka/data       # persist the data directory
      - ./kafka-two/logs:/opt/bitnami/kafka/logs   # persist the log directory
    depends_on:
      - acowbo-data-center-zookeeper-one
      - acowbo-data-center-zookeeper-two
      - acowbo-data-center-zookeeper-three
    restart: unless-stopped

  # Kafka node 3
  acowbo-data-center-kafka-three:
    image: bitnami/kafka:3.6.0
    container_name: acowbo-data-center-kafka-three
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=acowbo-data-center-zookeeper-one:2181,acowbo-data-center-zookeeper-two:2181,acowbo-data-center-zookeeper-three:2181
      - KAFKA_BROKER_ID=3
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://acowbo-data-center-kafka-three:9092
      - KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_CFG_OFFSETS_TOPIC_REPLICATION_FACTOR=3
      - TZ=Asia/Shanghai
    ports:
      - "17126:9092"
    networks:
      - acowbo-data-center
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./kafka-three/conf/server.properties:/opt/bitnami/kafka/config/server.properties
      - ./kafka-three/conf/log4j.properties:/opt/bitnami/kafka/config/log4j.properties
      - ./kafka-three/conf/consumer.properties:/opt/bitnami/kafka/config/consumer.properties
      - ./kafka-three/conf/producer.properties:/opt/bitnami/kafka/config/producer.properties
      - ./kafka-three/conf/zookeeper.properties:/opt/bitnami/kafka/config/zookeeper.properties
      - ./kafka-three/data:/bitnami/kafka/data       # persist the data directory
      - ./kafka-three/logs:/opt/bitnami/kafka/logs   # persist the log directory
    depends_on:
      - acowbo-data-center-zookeeper-one
      - acowbo-data-center-zookeeper-two
      - acowbo-data-center-zookeeper-three
    restart: unless-stopped

  # Kafka web UI
  acowbo-data-center-redpanda-console:
    image: redpandadata/console:latest
    container_name: acowbo-data-center-redpanda-console-cluster
    environment:
      - KAFKA_BROKERS=acowbo-data-center-kafka-one:9092,acowbo-data-center-kafka-two:9092,acowbo-data-center-kafka-three:9092
      - TZ=Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime:ro        # map the host's timezone setting
    ports:
      - "17127:8080"                            # Redpanda Console port exposed to the host
    networks:
      - acowbo-data-center
    depends_on:
      - acowbo-data-center-kafka-one
      - acowbo-data-center-kafka-two
      - acowbo-data-center-kafka-three
    restart: unless-stopped

# Custom network configuration
networks:
  acowbo-data-center:
    driver: bridge                              # use the bridge network driver
    name: acowbo-data-center                    # custom network name

Success screenshot

(screenshot: successful cluster startup)
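Once everything is up, it is worth confirming that the three brokers really joined one cluster rather than running as three standalone instances. A sketch; the topic name replica-check is arbitrary:

docker exec -it acowbo-data-center-kafka-one /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic replica-check --partitions 3 --replication-factor 3
docker exec -it acowbo-data-center-kafka-one /opt/bitnami/kafka/bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic replica-check

If the describe output shows leaders and replicas spread across broker IDs 1, 2 and 3, the cluster is healthy. If topic creation fails because the replication factor exceeds the number of available brokers, only part of the cluster registered itself; see the Zookeeper troubleshooting below.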

Filling in the pitfalls, from every angle

Single node

The single-node setup has almost no pitfalls; follow the steps above one by one and you will get there.

If Kafka runs as a single instance (that is, only one broker), the architecture constrains how the partition count, the replication factor and the minimum number of in-sync replicas can be configured. Recommended single-instance settings and the reasoning behind them:

1. Number of partitions

The partition count determines Kafka's parallelism and how messages are distributed.

  • Recommendation for single-instance Kafka:

    • A single-instance Kafka can still create multiple partitions, but they are all placed on the same broker.
    • Too many partitions can hurt performance, since all partition I/O and storage is concentrated on one broker.
  • Example:

    --partitions 1
    
  • Practical advice:
    Choose the partition count based on the workload. On a single instance, keep it moderate to avoid overloading the broker.


2. Replication factor

The replication factor sets how many copies of each partition are kept, which is what provides fault tolerance.

  • Constraint for single-instance Kafka:

    • With only one broker, the replication factor must be set to 1.
    • A replication factor of 1 means there is only a single copy, so data cannot be recovered if the broker goes down.
  • Example:

    --replication-factor 1
    
  • Note:
    For high availability, add broker nodes and raise the replication factor.


3. Minimum in-sync replicas (min.insync.replicas)

This parameter determines how many replicas must acknowledge a write (when the producer uses acks=all) before the write is confirmed.

  • Single-instance setting:

    • Since there is only the partition leader and no other replicas, min.insync.replicas must be set to 1.
  • Example (in the Kafka config file):

    min.insync.replicas=1
    
  • Notes:

    • On a single-instance Kafka, a producer configured with acks=all only waits for the leader's write acknowledgement.
    • For stronger fault tolerance, add brokers and replicas. A combined example of the three settings follows below.
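Putting the three settings together, creating a topic on the single broker might look like the following sketch (the topic name orders is only an example):

docker exec -it acowbo-data-center-kafka /opt/bitnami/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --topic orders --partitions 1 --replication-factor 1 --config min.insync.replicas=1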

Cluster

Zookeeper issues

Even if every container appears to start normally, that does not mean the cluster is correct. If the console shows only one broker, the Zookeeper ensemble most likely failed to form and the nodes came up standalone.

  1. First, enter a container and check whether Zookeeper is running standalone:

    # Enter the container; acowbo-data-center-zookeeper-three is the container name (a container ID also works)
    docker exec -it acowbo-data-center-zookeeper-three bash
    # Check whether this node is standalone or part of an ensemble
    zkServer.sh status
    

    (screenshot: zkServer.sh status output)

  2. Or check directly with the CLI:

    # Open the Zookeeper CLI
    docker exec -it acowbo-data-center-zookeeper-three zkCli.sh
    # Verify that the Kafka brokers registered themselves in Zookeeper
    ls /brokers/ids
    

    (screenshot: output of ls /brokers/ids)

Pay attention to the key points of the Zookeeper ensemble setup:

  1. The myid of every node must be different.

    (screenshot: myid values of the three nodes)

  2. The mounted configuration file (the server.N entries) must be identical on every node.

    (screenshot: shared zoo.cfg server entries)

If both conditions hold, the ensemble should form correctly; an illustrative zoo.cfg is sketched below.
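For reference, a zoo.cfg matching the compose file above might look like this. Only the server.N entries and the data path follow from the setup above; the tick and limit values are ordinary Zookeeper defaults and are assumptions here:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/bitnami/zookeeper/data
server.1=acowbo-data-center-zookeeper-one:2888:3888
server.2=acowbo-data-center-zookeeper-two:2888:3888
server.3=acowbo-data-center-zookeeper-three:2888:3888

The myid file (containing 1, 2 or 3) lives inside each node's data directory and is the part that must differ between nodes.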

Kafka issues

This is most likely a leftover from the earlier Zookeeper misconfiguration: the data mounted into the Kafka data directory still carries the old cluster ID, so the broker refuses to start.

Deleting the mounted data directories fixes it (a recovery sketch follows the log). The error looks like this:

[2025-01-06 15:33:53,808] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID eqJbCiT-TF2MQpYstOEkIg doesn't match stored clusterId Some(y8OMXK9JSDWdgbu7A8TBqg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:243)
	at kafka.Kafka$.main(Kafka.scala:113)
	at kafka.Kafka.main(Kafka.scala)
[2025-01-06 15:33:53,811] INFO shutting down (kafka.server.KafkaServer)
[2025-01-06 15:33:53,815] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2025-01-06 15:33:53,922] INFO Session: 0x1016399042e000f closed (org.apache.zookeeper.ZooKeeper)
[2025-01-06 15:33:53,922] INFO EventThread shut down for session: 0x1016399042e000f (org.apache.zookeeper.ClientCnxn)
[2025-01-06 15:33:53,925] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2025-01-06 15:33:53,929] INFO App info kafka.server for 1 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2025-01-06 15:33:53,930] INFO shut down completed (kafka.server.KafkaServer)
[2025-01-06 15:33:53,930] ERROR Exiting Kafka due to fatal exception during startup. (kafka.Kafka$)
kafka.common.InconsistentClusterIdException: The Cluster ID eqJbCiT-TF2MQpYstOEkIg doesn't match stored clusterId Some(y8OMXK9JSDWdgbu7A8TBqg) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
	at kafka.server.KafkaServer.startup(KafkaServer.scala:243)
	at kafka.Kafka$.main(Kafka.scala:113)
	at kafka.Kafka.main(Kafka.scala)
[2025-01-06 15:33:53,931] INFO shutting down (kafka.server.KafkaServer)
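To recover, stop the stack, remove the stale Kafka data directories, and start again. A minimal sketch, assuming the mount layout above:

docker-compose down
# discard the stale meta.properties that carries the old cluster ID
rm -rf ./kafka-one/data ./kafka-two/data ./kafka-three/data
docker-compose up -d

Note that this wipes any messages already stored on those brokers, which is usually acceptable while the cluster is still being set up.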
