Bug origin
Since business volume was low and the Kafka server on AWS was over-provisioned, we downsized the instance. After the downgrade and restart, the public IP had changed. Because KAFKA_ADVERTISED_LISTENERS was configured with a domain name, we simply pointed the domain's DNS record at the new IP, started the Kafka Docker container, and it came up.
In reality, though, Kafka had started but the business services could not read any messages from it.
Inspecting the data inside Kafka showed that a new directory had been created under KAFKA_LOG_DIRS, and in its meta.properties both broker.id (now 1002) and cluster.id had changed. In effect the broker had formed a brand-new Kafka cluster instead of reading the old data.
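A minimal sketch of the inspection, using the container name and mount paths from the compose file below; kafka-logs-<new-dir> is a hypothetical placeholder for the freshly generated directory:

# List the log dirs inside the Kafka container
docker exec micro_service-kafka ls /kafka
# kafka-logs-7ac76778470f   <- old data
# kafka-logs-<new-dir>      <- created after the restart (hypothetical name)

# Compare broker.id and cluster.id between the two
docker exec micro_service-kafka cat /kafka/kafka-logs-7ac76778470f/meta.properties
docker exec micro_service-kafka cat /kafka/kafka-logs-<new-dir>/meta.properties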
Solution
1. Set the KAFKA_LOG_DIRS parameter in docker-compose.yml to point at the old log dir.
2. Copy the old log dir's meta.properties as a backup first, to avoid breaking things. Then change broker.id in meta.properties from 1002 back to the old 1001, and set its cluster.id to the cluster.id found in the new log dir's meta.properties; otherwise startup fails with "The Cluster ID xxx doesn't match stored clusterId Some(xxx) in meta.properties" (see the sketch after this list).
3. Restart the Kafka Docker container.
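A minimal sketch of steps 2 and 3 on the host, with the old log dir path derived from the volume mapping in the compose file below; <new-cluster-id> stands for the value read out of the new log dir's meta.properties:

D=/data/kafka/kafka-data/kafka-logs-7ac76778470f   # old log dir on the host (mounted into the container at /kafka)
cp "$D/meta.properties" "$D/meta.properties.bak"   # step 2: back up first
sed -i 's/^broker.id=1002$/broker.id=1001/' "$D/meta.properties"
sed -i 's/^cluster.id=.*/cluster.id=<new-cluster-id>/' "$D/meta.properties"
docker-compose up -d kafka                         # step 3: recreate so the KAFKA_LOG_DIRS change takes effect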
References:
一个kafka的辛酸填坑路 ("A bitter journey through Kafka pitfalls")
The Cluster ID xxx doesn’t match stored clusterId Some(xxx) in meta.properties. The broker is trying
version: "3"

services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: micro_service-zookeeper
    ports:
      - "2181:2181"
    restart: always
    networks:
      - kafka-zookeeper
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65535
        hard: 65535

  kafka:
    image: wurstmeister/kafka
    container_name: micro_service-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_LISTENERS: DOCKER://0.0.0.0:19092,HOST://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: DOCKER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: DOCKER:PLAINTEXT,HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: DOCKER://kafka:19092,HOST://${HOSTNAME_COMMAND}:9092
      KAFKA_LOG_DIRS: /kafka/kafka-logs-7ac76778470f
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/kafka/kafka-data:/kafka:rw
    restart: always
    networks:
      - kafka-zookeeper
    # deploy:
    #   resources:
    #     limits:
    #       cpus: '0.50'
    #       memory: 4G
    #     reservations:
    #       cpus: '0.25'
    #       memory: 512M
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65535
        hard: 65535

networks:
  kafka-zookeeper:
    driver: bridge
Set in .env:
HOSTNAME_COMMAND=xxx.xxxxx.com
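To confirm the fix, one can check that the advertised domain resolves to the broker's new public IP and that a client can pull cluster metadata; a quick sketch, assuming kafkacat is available on a client machine:

dig +short xxx.xxxxx.com           # should print the new public IP
kafkacat -b xxx.xxxxx.com:9092 -L  # -L lists broker/topic metadata over the HOST listener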