Hi Ismael,
I'm not sure whether my response on RC1 was lost or this issue is simply not considered a show-stopper:
I checked again, and with RC2 the tests still fail in my Windows 64-bit environment.
:clients:checkstyleMain
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Data Abstraction Coupling is 57 (max allowed is 20) classes [ApiExceptionBuilder, BrokerNotAvailableException, ClusterAuthorizationException, ConcurrentTransactionsException, ControllerMovedException, CoordinatorLoadInProgressException, CoordinatorNotAvailableException, CorruptRecordException, DuplicateSequenceNumberException, GroupAuthorizationException, IllegalGenerationException, IllegalSaslStateException, InconsistentGroupProtocolException, InvalidCommitOffsetSizeException, InvalidConfigurationException, InvalidFetchSizeException, InvalidGroupIdException, InvalidPartitionsException, InvalidPidMappingException, InvalidReplicaAssignmentException, InvalidReplicationFactorException, InvalidRequestException, InvalidRequiredAcksException, InvalidSessionTimeoutException, InvalidTimestampException, InvalidTopicException, InvalidTxnStateException, InvalidTxnTimeoutException, LeaderNotAvailableException, NetworkException, NotControllerException, NotCoordinatorException, NotEnoughReplicasAfterAppendException, NotEnoughReplicasException, NotLeaderForPartitionException, OffsetMetadataTooLarge, OffsetOutOfRangeException, OperationNotAttemptedException, OutOfOrderSequenceException, PolicyViolationException, ProducerFencedException, RebalanceInProgressException, RecordBatchTooLargeException, RecordTooLargeException, ReplicaNotAvailableException, SecurityDisabledException, TimeoutException, TopicAuthorizationException, TopicExistsException, TransactionCoordinatorFencedException, TransactionalIdAuthorizationException, UnknownMemberIdException, UnknownServerException, UnknownTopicOrPartitionException, UnsupportedForMessageFormatException, UnsupportedSaslMechanismException, UnsupportedVersionException]. [ClassDataAbstractionCoupling]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1: Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max allowed is 40). [ClassFanOutComplexity]
[ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1: Class Fan-Out Complexity is 42 (max allowed is 40). [ClassFanOutComplexity]
:clients:checkstyleMain FAILED
FAILURE: Build failed with an exception.
Thanks.
--Vahid
Hi Ismael,
To answer your questions:
1. Yes, the issue exists in trunk too.
2. I haven't checked with Cygwin, but I can give it a try.
And thanks for addressing this issue; I can confirm that with your PR I no
longer see it.
But now that the tests progress further, I see quite a few errors like this in
core:
kafka.server.ReplicaFetchTest > classMethod FAILED
    java.lang.AssertionError: Found unexpected threads, allThreads=Set(ZkClient-EventThread-268-127.0.0.1:56565, ProcessThread(sid:0 cport:56565):, metrics-meter-tick-thread-2, SessionTracker, Signal Dispatcher, main, Reference Handler, ForkJoinPool-1-worker-1, Attach Listener, ProcessThread(sid:0 cport:59720):, ZkClient-EventThread-1347-127.0.0.1:59720, kafka-producer-network-thread | producer-1, Test worker-SendThread(127.0.0.1:56565), /127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 2, Test worker, SyncThread:0, NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread, Test worker-SendThread(127.0.0.1:59720), /127.0.0.1:54942 to /127.0.0.1:54926 workers Thread 3, ZkClient-EventThread-22-127.0.0.1:54976, ProcessThread(sid:0 cport:54976):, Test worker-SendThread(127.0.0.1:54976), Finalizer, metrics-meter-tick-thread-1)
I tested on a VM and a physical machine, and both give me a lot of errors
like this.
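For what it's worth, below is a minimal, self-contained sketch of the kind of check that produces this assertion: snapshot the live thread names before the test body runs, then fail afterwards if new, unexpected threads are still alive. This is only an illustration in plain Java; the class and method names are made up and it is not Kafka's actual test harness.

import java.util.HashSet;
import java.util.Set;

/**
 * Illustrative sketch (hypothetical names, not Kafka's test utilities) of a
 * thread-leak check: any thread that exists after the test but not before it,
 * e.g. a leftover ZkClient-EventThread or kafka-producer-network-thread,
 * indicates that some service was not shut down.
 */
public class ThreadLeakCheckSketch {

    private static Set<String> liveThreadNames() {
        Set<String> names = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet())
            names.add(t.getName());
        return names;
    }

    public static void main(String[] args) {
        Set<String> before = liveThreadNames();

        // ... the test body would run here: start ZooKeeper, brokers, producers, etc. ...

        Set<String> leaked = liveThreadNames();
        leaked.removeAll(before);
        if (!leaked.isEmpty())
            throw new AssertionError("Found unexpected threads, allThreads=" + leaked);
        System.out.println("No leaked threads detected.");
    }
}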
Hi Ismael,
This is the output of core tests from the start until the first failed test.
kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenReplicas PASSED
kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware PASSED
kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED
kafka.admin.AdminRackAwareTest > testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED
kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED
kafka.admin.AdminRackAwareTest > testSingleRack PASSED
kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithRandomStartIndex PASSED
kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment PASSED
kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED
kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED
kafka.admin.ConfigCommandTest > testScramCredentials PASSED
kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType PASSED
kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED
kafka.admin.ConfigCommandTest > shouldAddTopicConfig PASSED
kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED
kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED
kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED
kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED
kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED
kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED
kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED
kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED
kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED
kafka.admin.BrokerApiVersionsCommandTest > checkBrokerApiVersionCommandOutput PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldRemoveThrottleReplicaListBasedOnProposedAssignment PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicasMultipleTopics PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldNotOverwriteExistingPropertiesWhenLimitIsAdded PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicasMultipleTopicsAndPartitions PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldRemoveThrottleLimitFromAllBrokers PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicas PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicasMultiplePartitions PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldSetQuotaLimit PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicasWhenProposedIsSubsetOfExisting PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldUpdateQuotaLimit PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldFindTwoMovingReplicasInSamePartition PASSED
kafka.admin.ReassignPartitionsCommandTest > shouldNotOverwriteEntityConfigsWhenUpdatingThrottledReplicas PASSED
kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED
kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED
kafka.admin.ConfigCommandTest > shouldNotUpdateBrokerConfigIfMalformedConfig PASSED
kafka.admin.DeleteConsumerGroupTest > testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics PASSED
kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED
kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType PASSED
kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED
kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED
kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED
kafka.admin.AdminTest > testGetBrokerMetadatas PASSED
kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType PASSED
kafka.admin.AclCommandTest > testAclCli PASSED
kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign PASSED
kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED
kafka.admin.ReassignPartitionsClusterTest > shouldExecuteThrottledReassignment FAILED
java.nio.file.FileSystemException: C:\Users\IBM_AD~1\AppData\Local\Temp\kafka-719085320148197500\my-topic-0\00000000000000000000.index: The process cannot access the file because it is being used by another process.
From the error message, it sounds like one of the prior tests is not cleaning up after itself properly.
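To illustrate the kind of clean-up that matters here (a sketch only, with hypothetical names, not the actual test code): on Windows a file that still has an open handle, e.g. an .index file held by an open FileChannel or memory map, cannot be deleted, so a test has to close those handles before removing its temp log directory.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.BasicFileAttributes;

/**
 * Illustrative sketch of per-test temp-directory clean-up. On Windows,
 * deleting a file that another handle still holds fails with the
 * "being used by another process" FileSystemException seen above.
 */
public class TempLogDirCleanupSketch {

    // Recursively delete a temp directory; throws if any file is still held open.
    static void deleteRecursively(Path dir) throws IOException {
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult postVisitDirectory(Path d, IOException exc) throws IOException {
                Files.delete(d);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path tempLogDir = Files.createTempDirectory("kafka-test-logs");
        Path index = tempLogDir.resolve("00000000000000000000.index");
        try (FileChannel channel = FileChannel.open(index,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            channel.write(ByteBuffer.wrap(new byte[] {0}));
        } // closing the channel is the "proper clean-up"; deleting while it is still open fails on Windows
        deleteRecursively(tempLogDir);
        System.out.println("Cleaned up " + tempLogDir);
    }
}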
Hi,
Heroku has been doing additional performance testing on (1) log compaction
and, separately, (2) Go clients using the older message format against 0.11 RC2
brokers using the new message format.
For log compaction, we've tested with messages using a single key, messages
using unique keys, and messages with a bounded key range. There were no
notable negative performance impacts.
For the old-format vs. new-format client testing, we had Sarama Go async
producer clients speaking their older protocol versions, producing messages
in a tight loop. This resulted in a high percentage of errors, though some
messages went through:
Failed to produce message kafka: Failed to produce message to topic
rc2-topic: kafka server: Message was too large, server rejected it to avoid
allocation error.
Although this is expected, as mentioned in the docs
(http://kafka.apache.org/0110/documentation.html#upgrade_11_message_format),
since messages may in aggregate become larger than the broker's
max.message.bytes, we'd like to point out that this might be confusing for
users running older clients against 0.11. Users can, however, work around the
issue by tuning their request size to stay below max.message.bytes, as in the
sketch below.
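As a concrete illustration of that workaround, here is a minimal sketch using the Java producer (the report above concerns Sarama/Go clients, where the analogous setting is Producer.MaxMessageBytes; the broker address, topic name, and the specific byte values below are placeholders, not recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

/**
 * Sketch of the workaround: keep the client's request/batch size comfortably
 * below the broker/topic message size limit (message.max.bytes /
 * max.message.bytes, roughly 1 MB by default) so that batches do not end up
 * rejected as too large once the broker handles the older format.
 */
public class BoundedRequestSizeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Cap the maximum request size below the broker's limit.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 900_000);
        // Keep individual batches small as well.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("rc2-topic", "key", "value"));
        }
    }
}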
This, along with the testing previously mentioned by Tom, wraps up our
performance testing. Overall, we're a +1 (non-binding) for this release, but
we wanted to point out the client issue above.
Thanks,
Jeff
+1. Verified the 0110 web docs and Java docs; verified the quick start with the Scala 2.11 and 2.12 versions. One minor observation: in the web docs we show the quick start commands for the Scala 2.11 version only; we'd better template them with the version number:
> tar -xzf kafka_2.11-0.10.2.0.tgz
> cd kafka_2.11-0.10.2.0
-- Guozhang