ksqlDB 0.21 || DROP TABLE DELETE TOPIC command frequently getting stuck in "Executing statement" state


Anup Tiwari

Nov 25, 2021, 2:15:15 AM
to ksqldb-users
Hi Team,

We are using ksqlDB 0.21 and frequently observe that when I drop a set of tables (DROP TABLE t1 DELETE TOPIC;), a few of the drops get stuck in the "Executing statement" state. While one is stuck, we cannot submit any other DROP statement, and the ksqlDB health check API reports an unhealthy status until the earlier statement completes.
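For context, this is roughly what we run; the server address below is a placeholder for our setup:

  -- issued one table at a time from the ksql CLI
  DROP TABLE MMA_TEST_TABLE DELETE TOPIC;

  # health check we poll while a drop is in flight
  curl -s http://localhost:8088/healthcheck
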
Below are the logs from one such DROP statement. Please let me know if anything needs to be tuned on our end to avoid this recurring issue, the blocking, and the health check failures.

Logs :

 ksql-server-start: [2021-11-22 12:20:21,629] INFO Executing statement: DROP TABLE MMA_TEST_TABLE; (io.confluent.ksql.rest.server.computation.CommandRunner:339)
 ksql-server-start: [2021-11-22 12:25:21,630] INFO stream-client [_confluent-ksql-poc_query_CTAS_MMA_TEST_TABLE_527-a4c684e4-036e-4359-aefe-ea72c5ce7727] Streams client cannot stop completely within the timeout (org.apache.kafka.streams.KafkaStreams:1382)
 ksql-server-start: [2021-11-22 12:25:21,631] WARN query has not terminated even after close. This may happen when streams threads are hung. State: PENDING_SHUTDOWN (io.confluent.ksql.util.QueryMetadataImpl:344)
 ksql-server-start: [2021-11-22 12:25:21,631] WARN Query has not successfully closed, skipping cleanup (io.confluent.ksql.util.QueryMetadataImpl:371)
 ksql-server-start: [2021-11-22 12:25:21,631] INFO Executed statement: DROP TABLE MMA_TEST_TABLE; (io.confluent.ksql.rest.server.computation.CommandRunner:346)
 ksql-server-start: [2021-11-22 12:25:21,650] WARN Deleted local state store for non-existing query _confluent-ksql-poc_query_CTAS_MMA_TEST_TABLE_527. This is not expected and was likely due to a race condition when the query was dropped before. (io.confluent.ksql.engine.QueryCleanupService:121)
 ksql-server-start: [2021-11-22 12:25:21,660] WARN Failed to cleanup internal consumer groups for _confluent-ksql-poc_query_CTAS_MMA_TEST_TABLE_527 (io.confluent.ksql.engine.QueryCleanupService:149)
 ksql-server-start: io.confluent.ksql.exception.KafkaResponseGetFailedException: Failed to delete consumer groups: [_confluent-ksql-poc_query_CTAS_MMA_TEST_TABLE_527]
 ksql-server-start: at io.confluent.ksql.services.KafkaConsumerGroupClientImpl.deleteConsumerGroups(KafkaConsumerGroupClientImpl.java:124)
 ksql-server-start: at io.confluent.ksql.engine.QueryCleanupService$QueryCleanupTask.lambda$run$2(QueryCleanupService.java:141)
 ksql-server-start: at io.confluent.ksql.engine.QueryCleanupService$QueryCleanupTask.tryRun(QueryCleanupService.java:147)
 ksql-server-start: at io.confluent.ksql.engine.QueryCleanupService$QueryCleanupTask.run(QueryCleanupService.java:138)
 ksql-server-start: at io.confluent.ksql.engine.QueryCleanupService.run(QueryCleanupService.java:63)
 ksql-server-start: at com.google.common.util.concurrent.AbstractExecutionThreadService$1$2.run(AbstractExecutionThreadService.java:66)
 ksql-server-start: at com.google.common.util.concurrent.Callables$4.run(Callables.java:117)
 ksql-server-start: at java.lang.Thread.run(Thread.java:748)
 ksql-server-start: Caused by: org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does not exist.
 ksql-server-start: [2021-11-22 12:25:21,753] ERROR stream-thread [_confluent-ksql-poc_query_CTAS_MMA_TEST_TABLE_527-a4c684e4-036e-4359-aefe-ea72c5ce7727-StreamThread-1] task [0_0] Error encountered sending record to topic MMA_TEST_TABLE for task 0_0 due to:
 ksql-server-start: org.apache.kafka.common.errors.TimeoutException: Expiring 17 record(s) for MMA_TEST_TABLE-0:300101 ms has passed since batch creation
 ksql-server-start: The broker is either slow or in bad state (like not having enough replicas) in responding the request, or the connection to broker was interrupted sending the request or receiving the response.
 ksql-server-start: Consider overwriting `max.block.ms` and /or `delivery.timeout.ms` to a larger value to wait longer for such scenarios and avoid timeout errors (org.apache.kafka.streams.processor.internals.RecordCollectorImpl:234)
 ksql-server-start: org.apache.kafka.common.errors.TimeoutException: Expiring 17 record(s) for MMA_TEST_TABLE-0:300101 ms has passed since batch creation
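
Based on the suggestion in the last log lines, we are considering raising the producer timeouts via ksql-server.properties. The ksql.streams.producer. prefix and the values below are only our assumption, not something we have verified:

  # ksql-server.properties (values are placeholders)
  ksql.streams.producer.max.block.ms=600000
  ksql.streams.producer.delivery.timeout.ms=600000

Is this the right knob to tune, or is the root cause of the stuck "Executing statement" state something else?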

Regards,
Anup Tiwari

Anup Tiwari

Nov 29, 2021, 10:15:22 AM
to ksqldb-users
Hi Team,

Could someone take a look at this and share any pointers?

Regards,
Anup Tiwari
