This repository has been archived by the owner on Jan 24, 2024. It is now read-only.

[BUG] KStreams doesn't work with KoP #1743

Closed
asafm opened this issue Feb 28, 2023 · 16 comments

@asafm
Contributor

asafm commented Feb 28, 2023

Describe the bug
I'm running a Kafka Streams example based on the official Kafka Streams Examples repository.

When I run the file, I get the following exception:

[2023-02-28 11:51:36,747] ERROR [anomaly-detection-lambda-example-client-StreamThread-1] stream-client [anomaly-detection-lambda-example-client] Encountered the following exception during processing and Kafka Streams opted to SHUTDOWN_CLIENT. The streams client is going to shut down now.  (org.apache.kafka.streams.KafkaStreams)
org.apache.kafka.streams.errors.StreamsException: Could not create topic anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition, because brokers don't support configuration replication.factor=-1. You can change the replication.factor config or upgrade your brokers to version 2.4 or newer to avoid this error.
	at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:463)
	at org.apache.kafka.streams.processor.internals.RepartitionTopics.setup(RepartitionTopics.java:73)
	at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.prepareRepartitionTopics(StreamsPartitionAssignor.java:504)
	at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:383)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:640)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:694)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:112)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:598)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:561)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1196)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1171)
	at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
	at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
	at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1297)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:980)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:933)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:751)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:604)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:576)

To Reproduce
Steps to reproduce the behavior:

  1. Run docker compose up -d using the following docker-compose file:
version: '3'
networks:
  pulsar:
    driver: bridge
services:
  # Start zookeeper
  zookeeper:
    image: streamnative/sn-pulsar:2.10.3.4
    container_name: zookeeper
    restart: "no"
    networks:
      - pulsar
    volumes:
      - ./data/zookeeper:/pulsar/data/zookeeper
    environment:
      - metadataStoreUrl=zk:zookeeper:2181
      - PULSAR_MEM=-Xms256m -Xmx256m -XX:MaxDirectMemorySize=256m
    command: >
      bash -c "bin/apply-config-from-env.py conf/zookeeper.conf && \
             bin/generate-zookeeper-config.sh conf/zookeeper.conf && \
             exec bin/pulsar zookeeper"
    healthcheck:
      test: ["CMD", "bin/pulsar-zookeeper-ruok.sh"]
      interval: 10s
      timeout: 5s
      retries: 30
    ports:
      - "2181:2181"

  # Init cluster metadata
  pulsar-init:
    container_name: pulsar-init
    hostname: pulsar-init
    image: streamnative/sn-pulsar:2.10.3.4
    networks:
      - pulsar
    command: >
      bin/pulsar initialize-cluster-metadata \
               --cluster cluster-a \
               --zookeeper zookeeper:2181 \
               --configuration-store zookeeper:2181 \
               --web-service-url http://broker:8080 \
               --broker-service-url pulsar://broker:6650
    depends_on:
      zookeeper:
        condition: service_healthy

  # Start bookie
  bookie:
    image: streamnative/sn-pulsar:2.10.3.4
    container_name: bookie
    restart: "no"
    networks:
      - pulsar
    environment:
      - clusterName=cluster-a
      - zkServers=zookeeper:2181
      - metadataServiceUri=metadata-store:zk:zookeeper:2181
      # otherwise every time we run docker compose up or down, startup fails due to a Cookie mismatch
      - advertisedAddress=bookie
      - BOOKIE_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m
    depends_on:
      zookeeper:
        condition: service_healthy
      pulsar-init:
        condition: service_completed_successfully
    # Map the local directory to the container to avoid bookie startup failure due to insufficient container disks.
    volumes:
      - ./data/bookkeeper:/pulsar/data/bookkeeper
    command: bash -c "bin/apply-config-from-env.py conf/bookkeeper.conf
      && exec bin/pulsar bookie"

  # Start broker
  broker:
    image: streamnative/sn-pulsar:2.10.3.4
    container_name: broker
    hostname: broker
    restart: "no"
    networks:
      - pulsar
    environment:
      - PULSAR_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m
      - PULSAR_PREFIX_metadataStoreUrl=zk:zookeeper:2181
      - PULSAR_PREFIX_zookeeperServers=zookeeper:2181
      - PULSAR_PREFIX_clusterName=cluster-a
      - PULSAR_PREFIX_managedLedgerDefaultEnsembleSize=1
      - PULSAR_PREFIX_managedLedgerDefaultWriteQuorum=1
      - PULSAR_PREFIX_managedLedgerDefaultAckQuorum=1
      - PULSAR_PREFIX_advertisedAddress=broker
      - PULSAR_PREFIX_advertisedListeners=external:pulsar://127.0.0.1:6650
      # KoP
      - PULSAR_PREFIX_messagingProtocols=kafka
      - PULSAR_PREFIX_allowAutoTopicCreationType=partitioned
      - PULSAR_PREFIX_kafkaListeners=PLAINTEXT://0.0.0.0:9092
      - PULSAR_PREFIX_kafkaAdvertisedListeners=PLAINTEXT://127.0.0.1:9092
      - PULSAR_PREFIX_brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
      - PULSAR_PREFIX_brokerDeleteInactiveTopicsEnabled=false
      #- PULSAR_PREFIX_kopSchemaRegistryEnable=true
    depends_on:
      zookeeper:
        condition: service_healthy
      bookie:
        condition: service_started
    ports:
      - "6650:6650"
      - "8080:8080"
      #- "8001:8001" # what is this for?
      - "9092:9092"
    command: bash -c "bin/apply-config-from-env.py conf/broker.conf &&  exec bin/pulsar broker"
  2. Once the broker is up, create the 2 topics per the example's instructions, using Kafka 2.8:
./kafka-topics.sh --create --topic UserClicks --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
./kafka-topics.sh --create --topic AnomalousUsers --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
  3. Clone https://github.com/confluentinc/kafka-streams-examples
  4. Run from the terminal:
mvn -Dskip.tests=true compile
  5. Load the project in IntelliJ and run AnomalyDetectionLambdaExample

Expected behavior
The example should run without errors.

@asafm asafm added the type/bug label Feb 28, 2023
@BewareMyPower BewareMyPower self-assigned this Mar 1, 2023
@BewareMyPower
Collaborator

It's weird: I tried adding a test to KoP, but it passed.

diff --git a/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/AnomalyDetectionLambdaTest.java b/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/AnomalyDetectionLambdaTest.java
new file mode 100644
index 0000000..63194a0
--- /dev/null
+++ b/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/AnomalyDetectionLambdaTest.java
@@ -0,0 +1,112 @@
+/**
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package io.streamnative.pulsar.handlers.kop.streams;
+
+import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+import lombok.NonNull;
+import lombok.extern.slf4j.Slf4j;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.LongDeserializer;
+import org.apache.kafka.common.serialization.Serde;
+import org.apache.kafka.common.serialization.Serdes;
+import org.apache.kafka.streams.KafkaStreams;
+import org.apache.kafka.streams.KeyValue;
+import org.apache.kafka.streams.kstream.KStream;
+import org.apache.kafka.streams.kstream.KTable;
+import org.apache.kafka.streams.kstream.Produced;
+import org.apache.kafka.streams.kstream.TimeWindows;
+import org.apache.kafka.streams.kstream.Windowed;
+import org.testng.annotations.Test;
+
+@Slf4j
+public class AnomalyDetectionLambdaTest extends KafkaStreamsTestBase {
+
+    @Override
+    protected void createTopics() throws Exception {
+        admin.topics().createPartitionedTopic("UserClicks", 1);
+        admin.topics().createPartitionedTopic("AnomalousUsers", 1);
+    }
+
+    @Override
+    protected @NonNull String getApplicationIdPrefix() {
+        return "anomaly-detection-lambda-example";
+    }
+
+    @Override
+    protected void extraSetup() throws Exception {
+        // No ops
+    }
+
+    @Override
+    protected Class<?> getKeySerdeClass() {
+        return Serdes.String().getClass();
+    }
+
+    @Override
+    protected Class<?> getValueSerdeClass() {
+        return Serdes.String().getClass();
+    }
+
+    @Test
+    public void test() throws Exception {
+        final Serde<String> stringSerde = Serdes.String();
+        final Serde<Long> longSerde = Serdes.Long();
+
+        final KStream<String, String> views = builder.stream("UserClicks");
+        final KTable<Windowed<String>, Long> anomalousUsers = views
+                .map((ignoredKey, username) -> new KeyValue<>(username, username))
+                .groupByKey()
+                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
+                .count()
+                .filter((windowedUserId, count) -> count >= 3);
+        final KStream<String, Long> anomalousUsersForConsole = anomalousUsers
+                .toStream()
+                .filter((windowedUserId, count) -> count != null)
+                .map((windowedUserId, count) -> new KeyValue<>(windowedUserId.toString(), count));
+
+        anomalousUsersForConsole.to("AnomalousUsers", Produced.with(stringSerde, longSerde));
+
+        final KafkaStreams streams = new KafkaStreams(builder.build(), streamsConfiguration);
+        streams.cleanUp();
+        streams.start();
+
+        final Properties consumerProps = newKafkaConsumerProperties();
+        consumerProps.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
+        final KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(consumerProps);
+        consumer.subscribe(Collections.singleton("AnomalousUsers"));
+
+        final KafkaProducer<String, String> producer = new KafkaProducer<>(newKafkaProducerProperties());
+        final String[] values = {"alice", "alice", "bob", "alice", "alice", "charlie"};
+        for (String value : values) {
+            producer.send(new ProducerRecord<>("UserClicks", value)).get();
+        }
+
+        int i = 0;
+        while (i < 4) {
+            final ConsumerRecords<String, Long> records = consumer.poll(Duration.ofSeconds(1));
+            records.forEach(record -> log.info("XYZ received {}", record.value()));
+            i += records.count();
+        }
+
+        streams.close();
+    }
+}
diff --git a/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/KafkaStreamsTestBase.java b/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/KafkaStreamsTestBase.java
index 88a0ad7..b79cb44 100644
--- a/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/KafkaStreamsTestBase.java
+++ b/tests/src/test/java/io/streamnative/pulsar/handlers/kop/streams/KafkaStreamsTestBase.java
@@ -36,7 +36,7 @@ public abstract class KafkaStreamsTestBase extends KopProtocolHandlerTestBase {
     protected String bootstrapServers;
     @Getter
     private int testNo = 0; // the suffix of the prefix of test topic name or application id, etc.
-    private Properties streamsConfiguration;
+    protected Properties streamsConfiguration;
     protected StreamsBuilder builder; // the builder to build `kafkaStreams` and other objects of Kafka Streams
     protected KafkaStreams kafkaStreams;

I will test 2.10.3.4 soon.
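
For reference, the test above can presumably be run from the KoP repository root with something like the following; the tests module path comes from the diff, while the exact Surefire invocation is an assumption:

mvn -pl tests test -Dtest=AnomalyDetectionLambdaTest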

@BewareMyPower
Collaborator

BewareMyPower commented Mar 2, 2023

Then I found the error described in this issue:

Could not create topic anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition, because brokers don't support configuration replication.factor=-1. You can change the replication.factor config or upgrade your brokers to version 2.4 or newer to avoid this error.

From https://kafka.apache.org/33/documentation/streams/developer-guide/config-streams#replication-factor-parm we can see that we need to set replication.factor explicitly (e.g. to 3) for Kafka brokers at version 2.3 or older, which do not support replication.factor=-1.

KoP upgraded the kafka-clients dependency to 2.8.0 in #1588, which was only cherry-picked to branch-2.11 because it is a huge PR that changes a lot of code. So in branch-2.10.x and earlier, the kafka-clients dependency is 2.0.0, which cannot handle replication.factor=-1.

There are two workarounds:

  1. Upgrade KoP to 2.11.0+; we have a 2.11.0.1 image now.
  2. Downgrade Kafka Streams to 2.3 or older; the default replication factor is 1 for those versions.
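
Alternatively, per the error message's own suggestion, the Streams application can set the replication factor explicitly instead of relying on the -1 default. A minimal sketch of that change in the example's configuration (using the standard StreamsConfig constant; treat it as illustrative, not a tested fix):

// In AnomalyDetectionLambdaExample's streams configuration, pin the internal-topic
// replication factor instead of sending -1 ("use broker default"), which brokers
// older than 2.4 / older kafka-clients cannot accept:
streamsConfiguration.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 1);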

@asafm
Contributor Author

asafm commented Apr 17, 2023

I upgraded to 2.11.0.4, and got this:

[2023-04-17 14:00:29,968] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 928 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

In broker logs

2023-04-17 14:00:29 2023-04-17T11:00:29,955+0000 [pulsar-web-38-2] INFO  org.eclipse.jetty.server.RequestLog - 172.19.0.4 - - [17/Apr/2023:11:00:29 +0000] "GET /admin/v2/persistent/public/default/UserClicks/partitions HTTP/1.1" 200 16 "-" "Pulsar-Java-v2.11.0.4" 14
2023-04-17 14:00:29 2023-04-17T11:00:29,960+0000 [pulsar-web-38-5] INFO  org.eclipse.jetty.server.RequestLog - 172.19.0.4 - - [17/Apr/2023:11:00:29 +0000] "GET /admin/v2/persistent/public/default/anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition/partitions HTTP/1.1" 404 142 "-" "Pulsar-Java-v2.11.0.4" 20
2023-04-17 14:00:29 2023-04-17T11:00:29,963+0000 [AsyncHttpClient-61-1] ERROR io.streamnative.pulsar.handlers.kop.KafkaRequestHandler - [[id: 0x5f6d8af4, L:/172.19.0.4:9092 - R:/172.19.0.1:48530]] Topic persistent://public/default/anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition doesn't exist and it's not allowed to auto create partitioned topic
2023-04-17 14:00:29 org.apache.pulsar.client.admin.PulsarAdminException$NotFoundException: Topic persistent://public/default/anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition not found
2023-04-17 14:00:29     at org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:248) ~[io.streamnative-pulsar-client-admin-original-2.11.0.4.jar:2.11.0.4]
2023-04-17 14:00:29     at org.apache.pulsar.client.admin.internal.BaseResource$FutureCallback.failed(BaseResource.java:337) ~[io.streamnative-pulsar-client-admin-original-2.11.0.4.jar:2.11.0.4]
2023-04-17 14:00:29     at org.glassfish.jersey.client.JerseyInvocation$1.failed(JerseyInvocation.java:882) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.JerseyInvocation$1.completed(JerseyInvocation.java:863) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:229) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.ClientRuntime.access$200(ClientRuntime.java:62) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.ClientRuntime$2.lambda$response$0(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.internal.Errors.process(Errors.java:292) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.internal.Errors.process(Errors.java:274) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.internal.Errors.process(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.ClientRuntime$2.response(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$apply$1(AsyncHttpConnector.java:251) ~[io.streamnative-pulsar-client-admin-original-2.11.0.4.jar:2.11.0.4]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
2023-04-17 14:00:29     at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$retryOperation$4(AsyncHttpConnector.java:293) ~[io.streamnative-pulsar-client-admin-original-2.11.0.4.jar:2.11.0.4]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
2023-04-17 14:00:29     at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) ~[?:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.NettyResponseFuture.loadContent(NettyResponseFuture.java:222) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.NettyResponseFuture.done(NettyResponseFuture.java:257) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.finishUpdate(AsyncHttpClientHandler.java:241) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.handler.HttpHandler.handleChunk(HttpHandler.java:114) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.handler.HttpHandler.handleRead(HttpHandler.java:143) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[io.netty-netty-codec-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) ~[io.netty-netty-codec-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) ~[io.netty-netty-codec-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[io.netty-netty-transport-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[io.netty-netty-common-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[io.netty-netty-common-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[io.netty-netty-common-4.1.86.Final.jar:4.1.86.Final]
2023-04-17 14:00:29     at java.lang.Thread.run(Thread.java:833) ~[?:?]
2023-04-17 14:00:29 Caused by: javax.ws.rs.NotFoundException: HTTP 404 Topic persistent://public/default/anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition not found
2023-04-17 14:00:29     at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:948) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     at org.glassfish.jersey.client.JerseyInvocation.access$700(JerseyInvocation.java:82) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
2023-04-17 14:00:29     ... 54 more

Docker compose file:

version: '3'
networks:
  pulsar:
    driver: bridge
services:
  # Start zookeeper
  zookeeper:
    image: streamnative/sn-pulsar:2.11.0.4
    container_name: zookeeper
    restart: "no"
    networks:
      - pulsar
    volumes:
      - ./data/zookeeper:/pulsar/data/zookeeper
    environment:
      - metadataStoreUrl=zk:zookeeper:2181
      - PULSAR_MEM=-Xms256m -Xmx256m -XX:MaxDirectMemorySize=256m
    command: >
      bash -c "bin/apply-config-from-env.py conf/zookeeper.conf && \
             bin/generate-zookeeper-config.sh conf/zookeeper.conf && \
             exec bin/pulsar zookeeper"
    healthcheck:
      test: ["CMD", "bin/pulsar-zookeeper-ruok.sh"]
      interval: 10s
      timeout: 5s
      retries: 30
    ports:
      - "2181:2181"

  # Init cluster metadata
  pulsar-init:
    container_name: pulsar-init
    hostname: pulsar-init
    image: streamnative/sn-pulsar:2.11.0.4
    networks:
      - pulsar
    command: >
      bin/pulsar initialize-cluster-metadata \
               --cluster cluster-a \
               --zookeeper zookeeper:2181 \
               --configuration-store zookeeper:2181 \
               --web-service-url http://broker:8080 \
               --broker-service-url pulsar://broker:6650
    depends_on:
      zookeeper:
        condition: service_healthy

  # Start bookie
  bookie:
    image: streamnative/sn-pulsar:2.11.0.4
    container_name: bookie
    restart: "no"
    networks:
      - pulsar
    environment:
      - clusterName=cluster-a
      - zkServers=zookeeper:2181
      - metadataServiceUri=metadata-store:zk:zookeeper:2181
      # otherwise every time we run docker compose up or down, startup fails due to a Cookie mismatch
      - advertisedAddress=bookie
      - BOOKIE_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m
    depends_on:
      zookeeper:
        condition: service_healthy
      pulsar-init:
        condition: service_completed_successfully
    # Map the local directory to the container to avoid bookie startup failure due to insufficient container disks.
    volumes:
      - ./data/bookkeeper:/pulsar/data/bookkeeper
    command: bash -c "bin/apply-config-from-env.py conf/bookkeeper.conf
      && exec bin/pulsar bookie"

  # Start broker
  broker:
    image: streamnative/sn-pulsar:2.11.0.4
    container_name: broker
    hostname: broker
    restart: "no"
    networks:
      - pulsar
    environment:
      - PULSAR_MEM=-Xms512m -Xmx512m -XX:MaxDirectMemorySize=256m
      - PULSAR_PREFIX_metadataStoreUrl=zk:zookeeper:2181
      - PULSAR_PREFIX_zookeeperServers=zookeeper:2181
      - PULSAR_PREFIX_clusterName=cluster-a
      - PULSAR_PREFIX_managedLedgerDefaultEnsembleSize=1
      - PULSAR_PREFIX_managedLedgerDefaultWriteQuorum=1
      - PULSAR_PREFIX_managedLedgerDefaultAckQuorum=1
      - PULSAR_PREFIX_advertisedAddress=broker
      - PULSAR_PREFIX_advertisedListeners=external:pulsar://127.0.0.1:6650
      # KoP
      - PULSAR_PREFIX_messagingProtocols=kafka
      - PULSAR_PREFIX_allowAutoTopicCreationType=partitioned
      - PULSAR_PREFIX_kafkaListeners=PLAINTEXT://0.0.0.0:9092
      - PULSAR_PREFIX_kafkaAdvertisedListeners=PLAINTEXT://127.0.0.1:9092
      - PULSAR_PREFIX_brokerEntryMetadataInterceptors=org.apache.pulsar.common.intercept.AppendIndexMetadataInterceptor
      - PULSAR_PREFIX_brokerDeleteInactiveTopicsEnabled=false
      #- PULSAR_PREFIX_kopSchemaRegistryEnable=true
    depends_on:
      zookeeper:
        condition: service_healthy
      bookie:
        condition: service_started
    ports:
      - "6650:6650"
      - "8080:8080"
      #- "8001:8001" # what is this for?
      - "9092:9092"
    command: bash -c "bin/apply-config-from-env.py conf/broker.conf &&  exec bin/pulsar broker"

@BewareMyPower
Collaborator

Okay, I will verify 2.11.0.4 as well.

@BewareMyPower
Collaborator

BewareMyPower commented Apr 18, 2023

Yeah, the error logs will be generated. But it seems that the Kafka Streams example still works well?

$ bin/kafka-topics.sh --create --topic UserClicks \
    --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
Created topic UserClicks.
$ bin/kafka-topics.sh --create --topic AnomalousUsers \
    --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
Created topic AnomalousUsers.
$ bin/kafka-console-consumer.sh --topic AnomalousUsers --from-beginning \
    --bootstrap-server localhost:9092 \
    --property print.key=true \
    --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
[alice@1681801260000/1681801320000]     4
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic UserClicks
>alice
>alice
>bob
>alice
>alice
>charlie
>^C

@BewareMyPower
Collaborator

2023-04-18T15:09:17,780+0800 [pulsar-ph-kafka-89-7] DEBUG io.streamnative.pulsar.handlers.kop.KafkaCommandDecoder - [/172.22.48.1:10133] Received kafka cmd RequestHeader(apiKey=METADATA, apiVersion=11, clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, correlationId=2), the request content is: KafkaHeaderAndRequest(header=RequestHeader(apiKey=METADATA, apiVersion=11, clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, correlationId=2), request=MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='UserClicks'), MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false), remoteAddress=/172.22.48.1:10133)

It seems to be related to the topic_id field, which was introduced in Kafka's Metadata request v10.

I'm going to look deeper into this issue.

@BewareMyPower
Collaborator

This issue is not related to the topic id. These topics will be created eventually. You can run the following commands via Kafka's CLI:

bin/kafka-topics.sh --create --topic UserClicks \
    --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
bin/kafka-topics.sh --create --topic AnomalousUsers \
    --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

NOTE: You should enable KoP's transaction support when using Kafka CLI 3.1.0 or later.
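
For reference, a sketch of what enabling that could look like in the broker environment of the compose file above, assuming KoP's kafkaTransactionCoordinatorEnabled setting and the same PULSAR_PREFIX_ convention (an assumption, not taken from this issue):

      # Assumption: enable KoP's transaction coordinator for newer Kafka CLI versions
      - PULSAR_PREFIX_kafkaTransactionCoordinatorEnabled=true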

Then, run the Kafka Streams application. After the following logs appear:

2023-04-18 18:35:38,598 INFO  anomaly-detection-lambda-example-client-StreamThread-1id [org.apache.kafka.streams.processor.internals.TaskManager] - stream-thread [anomaly-detection-lambda-example-client-StreamThread-1] Handle new assignment with:
	New active tasks: [1_0, 0_0]
	New standby tasks: []
	Existing active tasks: [1_0, 0_0]
	Existing standby tasks: []

You will see that the topics have been created:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
AnomalousUsers
UserClicks
anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-changelog
anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition

Even with a real Kafka server, you will still see the following warning logs on the client side.

2023-04-18 18:21:47,615 WARN anomaly-detection-lambda-example-client-StreamThread-1id [org.apache.kafka.clients.NetworkClient] - [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 33 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION}

However, there are no error logs on the Kafka broker side, because Kafka does not print an error log when a requested topic does not exist.

With KoP, more logs like the following appear on the client side:

2023-04-18 18:35:35,012 INFO anomaly-detection-lambda-example-client-StreamThread-1id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Request joining group due to: rebalance failed due to 'The coordinator is loading and hence can't process requests.' (CoordinatorLoadInProgressException)
2023-04-18 18:35:35,012 INFO anomaly-detection-lambda-example-client-StreamThread-1id [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] (Re-)joining group

That's because the default groupInitialRebalanceDelayMs is 3000 ms in KoP, see https://github.com/streamnative/kop/blob/master/docs/configuration.md#group-coordinator.

The default value of the equivalent group.initial.rebalance.delay.ms config is also 3000 in Kafka. However, the example config in Kafka changes it to 0 for quicker rebalance. You can find the comments in config/server.properties in Kafka:

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
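
If you want the same out-of-the-box behavior with KoP, a sketch of the equivalent override in the broker environment of the compose file above (the config name comes from the KoP configuration docs linked earlier; treat the exact line as illustrative):

      # Match Kafka's quickstart behavior: no initial rebalance delay (dev/test only)
      - PULSAR_PREFIX_groupInitialRebalanceDelayMs=0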

@asafm
Contributor Author

asafm commented May 1, 2023

First thing I want to say:

I tried it with Kafka, and it just worked:

  • Only 3 WARN logs when you run AnomalyDetectionLambdaExample
  • No WARN when you run kafka-console-producer.sh to write messages.
  • No WARN when you run kafka-console-consumer.sh to read messages.

How did I do it?
I switched to branch 7.3.3-post

  1. Grab the Docker Compose file to run Kafka:
curl --silent --output docker-compose.yml \
  https://raw.githubusercontent.com/confluentinc/cp-all-in-one/7.3.3-post/cp-all-in-one/docker-compose.yml
  2. Run it:
docker compose up -d
  3. Follow the instructions in the Javadoc of AnomalyDetectionLambdaExample
  4. Run AnomalyDetectionLambdaExample from IntelliJ (after ./mvnw clean install -DskipTests)

Only 3 WARN appeared
It worked.

@BewareMyPower
Collaborator

I think warning logs should not be treated as a bug. And after #1801, KoP won't print warning or error logs when receiving metadata requests for topics that do not exist.

You mentioned "it worked" for Kafka multiple times. However, how could you verify it did not work for KoP? Do you think warning logs mean "not worked"?

BTW, could you check my following comment before?

The default value of the equivalent group.initial.rebalance.delay.ms config is also 3000 in Kafka. However, the example config in Kafka changes it to 0 for quicker rebalance.

To avoid the effect of this config, config/server.properties in Kafka changes it to 0.

@asafm
Contributor Author

asafm commented May 14, 2023

Sorry for taking so long. Last time I got stuck trying to start Pulsar locally. I'm now trying it out from scratch.

Can you please explain the step you described that follows topic creation?

Then, run the Kafka Streams application, after the following logs appear:

2023-04-18 18:35:38,598 INFO  anomaly-detection-lambda-example-client-StreamThread-1id [org.apache.kafka.streams.processor.internals.TaskManager] - stream-thread [anomaly-detection-lambda-example-client-StreamThread-1] Handle new assignment with:
	New active tasks: [1_0, 0_0]
	New standby tasks: []
	Existing active tasks: [1_0, 0_0]
	Existing standby tasks: []

Why do I need to wait for this log, compared to Kafka, where I don't need to wait for anything after I create the topics?

I have searched the broker logs and have not found that line after topic creation.

I will address everything you wrote.

@asafm
Contributor Author

asafm commented May 14, 2023

Ok, now it is working!

I have modified my docker-compose-cluster.yaml to be exactly like the one in KoP (I think the main change was the version upgrade to streamnative/sn-pulsar:2.11.1.0).

The only thing that is different compared to Kafka is this WARN from the client:

[2023-05-14 17:59:14,610] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1333 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

This WARN appears many times. Here is the log for it:

[2023-05-14 17:58:59,269] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 2 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:00,797] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 7 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:00,985] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 66 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:01,140] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 97 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:01,322] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 118 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:01,497] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 144 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:01,675] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 173 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:03,380] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 199 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:03,868] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 229 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:04,099] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 259 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:04,898] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 277 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:06,564] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 298 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:07,238] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 616 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:07,427] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 685 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:08,346] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 706 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:08,698] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 749 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:08,938] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 760 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:10,101] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 776 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:10,606] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 904 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:10,756] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1008 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:10,907] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1036 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,058] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1055 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,205] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1079 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,360] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1105 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,515] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1118 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,656] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1147 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,833] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1167 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:11,986] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1203 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,137] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1236 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,292] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1277 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,432] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1309 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,566] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1319 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,700] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1320 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,844] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1321 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:12,980] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1322 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,134] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1323 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,275] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1324 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,412] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1325 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,571] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1326 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,735] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1327 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:13,877] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1328 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,026] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1329 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,170] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1330 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,311] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1331 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,464] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1332 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,610] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1333 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,743] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1334 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:14,882] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1335 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,012] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1336 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,146] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1337 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,275] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1338 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,411] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1339 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,558] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1340 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 17:59:15,690] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1341 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

This is how it looks when running the example against Kafka:

[2023-05-14 18:11:48,635] WARN [main] Using an OS temp directory in the state.dir property can cause failures with writing the checkpoint file due to the fact that this directory can be cleared by the OS. Resolved state.dir: [/var/folders/kc/tw2ty9r11f34925hs5ff_3yh0000gn/T//kafka-streams] (org.apache.kafka.streams.processor.internals.StateDirectory)
[2023-05-14 18:11:48,855] WARN [main] Error while loading kafka-streams-version.properties (org.apache.kafka.streams.internals.metrics.ClientMetrics)
java.lang.NullPointerException: inStream parameter is null
	at java.base/java.util.Objects.requireNonNull(Objects.java:233)
	at java.base/java.util.Properties.load(Properties.java:408)
	at org.apache.kafka.streams.internals.metrics.ClientMetrics.<clinit>(ClientMetrics.java:53)
	at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:894)
	at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:856)
	at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:826)
	at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:738)
	at io.confluent.examples.streams.AnomalyDetectionLambdaExample.main(AnomalyDetectionLambdaExample.java:159)
[2023-05-14 18:11:48,988] WARN [main] stream-thread [main] Failed to delete state store directory of /var/folders/kc/tw2ty9r11f34925hs5ff_3yh0000gn/T/kafka-streams/anomaly-detection-lambda-example for it is not empty (org.apache.kafka.streams.processor.internals.StateDirectory)
[2023-05-14 18:11:49,052] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 2 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2023-05-14 18:11:49,157] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 7 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)

Are you saying that with your PR, those WARN messages will no longer appear?

@asafm
Contributor Author

asafm commented May 14, 2023

Can you explain the group config property again? With Kafka I don't have any issue. Are you saying I should configure that property for KoP?

I tried adding this to my Pulsar environment in the Docker Compose file:

- PULSAR_PREFIX_groupInitialRebalanceDelayMs=0

but I still see many warnings like:

[2023-05-14 18:24:10,527] WARN [anomaly-detection-lambda-example-client-StreamThread-1] [Consumer clientId=anomaly-detection-lambda-example-client-StreamThread-1-consumer, groupId=anomaly-detection-lambda-example] Error while fetching metadata with correlation id 1014 : {anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
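For what it's worth, a quick way to check whether the repartition topic actually exists on the KoP side while these warnings are being logged is a small AdminClient program like the sketch below (the bootstrap address localhost:19092 is only an assumption, adjust it to your setup):

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class CheckRepartitionTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: KoP is exposed on localhost:19092; adjust to your environment.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");

        try (Admin admin = Admin.create(props)) {
            Set<String> topics = admin.listTopics().names().get();
            // Topic name taken from the warning logs above.
            String repartitionTopic =
                "anomaly-detection-lambda-example-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition";
            System.out.println("Repartition topic exists: " + topics.contains(repartitionTopic));
        }
    }
}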

@BewareMyPower
Collaborator

There are two issues.

  1. The UNKNOWN_TOPIC_OR_PARTITION error is retryable. It means the topic's leader is temporarily unavailable. From my local test against a Kafka server, this error still appeared.

Even with a Kafka server, you will still see such warning logs on the client side.

  2. Regarding the groupInitialRebalanceDelayMs config, it's the same as the group.initial.rebalance.delay.ms config in Kafka, see: https://kafka.apache.org/documentation/#brokerconfigs_group.initial.rebalance.delay.ms. You can check config/server.properties from any Kafka release binary.
kafka_2.13-3.3.1$ grep "group.initial.rebalance" config/server.properties
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
group.initial.rebalance.delay.ms=0
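If you want to confirm that the PULSAR_PREFIX_groupInitialRebalanceDelayMs setting actually reached the broker, you could try reading the config back over the Kafka protocol with DescribeConfigs. A minimal sketch (it assumes KoP answers DescribeConfigs for BROKER resources, and that broker id "0" and localhost:19092 match your setup):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class CheckRebalanceDelay {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: adjust the bootstrap address to your environment.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");

        try (Admin admin = Admin.create(props)) {
            // Assumption: broker id "0"; KoP must support DescribeConfigs for BROKER resources.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);
            ConfigEntry entry = config.get("group.initial.rebalance.delay.ms");
            System.out.println(entry == null ? "config not reported by broker" : entry.value());
        }
    }
}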

@asafm
Contributor Author

asafm commented May 21, 2023

There are two issues.

  1. The UNKNOWN_TOPIC_OR_PARTITION error is retryable. It means the topic's leader is temporarily unavailable. From my local test against a Kafka server, this error still appeared.

Even with a Kafka server, you will still see such warning logs on the client side.

Yes, both Kafka and KoP produce this WARN on the client, BUT with Kafka the WARN message appears once, while with KoP it appears many times. See my earlier comment, where I pasted the output for both, for the exact counts.

  2. Regarding the groupInitialRebalanceDelayMs config, it's the same as the group.initial.rebalance.delay.ms config in Kafka, see: https://kafka.apache.org/documentation/#brokerconfigs_group.initial.rebalance.delay.ms. You can check config/server.properties from any Kafka release binary.

I guess I don't understand the motivation for bringing up that config. What are we aiming to solve by using it?

kafka_2.13-3.3.1$ grep "group.initial.rebalance" config/server.properties
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
group.initial.rebalance.delay.ms=0

@BewareMyPower
Collaborator

BUT with Kafka the WARN message appears once, while with KoP it appears many times.

Yes, that sounds like an issue. I think you can open another one for that; this issue has already gone far enough.

I guess I don't understand the motivation for talking about that config?

Because KoP might produce the following log while Kafka does not, due to the difference in that config.

rebalance failed due to 'The coordinator is loading and hence can't process requests.' (CoordinatorLoadInProgressException)
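For context, CoordinatorLoadInProgressException is a retriable error in the Kafka client library (it extends RetriableException), so outside of Streams a plain retry loop is usually enough to ride it out. A rough sketch, assuming the group id matches the Streams application.id and that the exception actually surfaces to the caller rather than being retried internally by the admin client:

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.errors.CoordinatorLoadInProgressException;

public class WaitForGroupCoordinator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: adjust the bootstrap address to your environment.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:19092");

        try (Admin admin = Admin.create(props)) {
            while (true) {
                try {
                    // Assumption: the group id matches the Streams application.id.
                    admin.describeConsumerGroups(
                            Collections.singleton("anomaly-detection-lambda-example"))
                        .all().get();
                    break; // the coordinator answered, so the group metadata is loaded
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof CoordinatorLoadInProgressException) {
                        Thread.sleep(500); // retriable: the coordinator is still loading
                    } else {
                        throw e;
                    }
                }
            }
        }
    }
}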

@asafm
Contributor Author

asafm commented May 22, 2023

Opened: #1858

@asafm asafm closed this as completed May 22, 2023