On 23.01.20 16:13, Avi Kivity wrote:
>
> On 23/01/2020 16.29, Michael wrote:
>> On 23.01.20 14:09, Avi Kivity wrote:
>>
>>
>>>
>>> Another thing to try is to restart all nodes in the cluster (in order
>>> to clear their caches) and run a select for that key from cqlsh with
>>> TRACING ON. This may provide more information.
>>>
>>
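As an aside, the same traced read can also be captured programmatically, for example with the DataStax Python driver. The following is only a minimal sketch: the contact point, table name and key are the ones from this thread, and the keyspace name is a placeholder.

from cassandra.cluster import Cluster

# Minimal sketch: contact point and table come from this thread; the
# keyspace name is a placeholder and must be replaced.
cluster = Cluster(['192.168.0.15'])
session = cluster.connect('my_keyspace')

rs = session.execute(
    "SELECT key FROM errorness_table "
    "WHERE key = 0x3e6c39ce121da0d99a38e4f1c85cc09c",
    trace=True,
)
print(list(rs))

# Fetch and print the server-side trace events for this request.
trace = rs.get_query_trace()
for event in trace.events:
    print(event.source, event.source_elapsed, event.description)

cluster.shutdown()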
Here is the tracing of a select for a missing partition (more available if you want):
cqlsh> select key from errorness_table where key = 0x3e6c39ce121da0d99a38e4f1c85cc09c;

 key
-----

(0 rows)
Tracing session: eef25710-3df6-11ea-b139-000000000004
 activity | timestamp | source | source_elapsed | client
----------+-----------+--------+----------------+--------
 Execute CQL3 query | 2020-01-23 15:42:16.833000 | 192.168.0.15 | 0 | 127.0.0.1
 Parsing a statement [shard 0] | 2020-01-23 15:42:16.833733 | 192.168.0.15 | 1 | 127.0.0.1
 Processing a statement [shard 0] | 2020-01-23 15:42:16.833837 | 192.168.0.15 | 104 | 127.0.0.1
 Creating read executor for token 2773306311014603495 with all: {192.168.0.8, 192.168.0.9, 192.168.0.13} targets: {192.168.0.8, 192.168.0.9} repair decision: NONE [shard 0] | 2020-01-23 15:42:16.833968 | 192.168.0.15 | 236 | 127.0.0.1
 read_digest: sending a message to /192.168.0.9 [shard 0] | 2020-01-23 15:42:16.833984 | 192.168.0.15 | 251 | 127.0.0.1
 read_data: sending a message to /192.168.0.8 [shard 0] | 2020-01-23 15:42:16.834015 | 192.168.0.15 | 282 | 127.0.0.1
 read_data: message received from /192.168.0.15 [shard 0] | 2020-01-23 15:42:16.834615 | 192.168.0.8 | 22 | 127.0.0.1
 read_digest: message received from /192.168.0.15 [shard 0] | 2020-01-23 15:42:16.834664 | 192.168.0.9 | 14 | 127.0.0.1
 Start querying the token range that starts with 2773306311014603495 [shard 6] | 2020-01-23 15:42:16.834769 | 192.168.0.9 | 11 | 127.0.0.1
 Start querying the token range that starts with 2773306311014603495 [shard 6] | 2020-01-23 15:42:16.834789 | 192.168.0.8 | 18 | 127.0.0.1
 Querying is done [shard 6] | 2020-01-23 15:42:16.834829 | 192.168.0.9 | 71 | 127.0.0.1
 Querying is done [shard 6] | 2020-01-23 15:42:16.834885 | 192.168.0.8 | 115 | 127.0.0.1
 read_digest handling is done, sending a response to /192.168.0.15 [shard 0] | 2020-01-23 15:42:16.835066 | 192.168.0.9 | 416 | 127.0.0.1
 read_data handling is done, sending a response to /192.168.0.15 [shard 0] | 2020-01-23 15:42:16.835286 | 192.168.0.8 | 692 | 127.0.0.1
 read_digest: got response from /192.168.0.9 [shard 0] | 2020-01-23 15:42:16.835507 | 192.168.0.15 | 1774 | 127.0.0.1
 read_data: got response from /192.168.0.8 [shard 0] | 2020-01-23 15:42:16.836269 | 192.168.0.15 | 2536 | 127.0.0.1
 Done processing - preparing a result [shard 0] | 2020-01-23 15:42:16.836328 | 192.168.0.15 | 2595 | 127.0.0.1
 Request complete | 2020-01-23 15:42:16.835612 | 192.168.0.15 | 2612 | 127.0.0.1
>>
>
> Ok, let's see what the tracing results are.
>
>
> Can you share the tokens for missing partitions, along with the number
> of shards per node? I'll see if they match a shard boundary.
>
Some tokens from missing partitions (there are eight shards per node):
1228594666495504249, 3651180218813750844, 6082364005475862294,
2767066932624749567, 6386387569957585656, 8737956552098038433
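To check whether these tokens fall near shard boundaries, the token-to-shard mapping can be sketched in a few lines of Python. The bias/shift logic and the default ignore_msb_bits value of 12 below are assumptions based on a reading of Scylla's dht/token code, so treat the output as illustrative rather than authoritative.

# Sketch: map a signed 64-bit murmur3 token to a Scylla shard, to see
# whether the missing partitions cluster near shard boundaries.
# The algorithm and the ignore_msb_bits default (12) are assumptions.
def shard_of(token, shards, ignore_msb_bits=12):
    # Bias the signed token into an unsigned value in [0, 2**64).
    biased = (token + (1 << 63)) & ((1 << 64) - 1)
    # Drop the most significant bits that sharding ignores.
    biased = (biased << ignore_msb_bits) & ((1 << 64) - 1)
    # Scale the remaining fraction of the token range to the shard count.
    return (biased * shards) >> 64

# Tokens reported as missing in this thread, plus the token from the trace;
# eight shards per node.
tokens = [
    1228594666495504249, 3651180218813750844, 6082364005475862294,
    2767066932624749567, 6386387569957585656, 8737956552098038433,
    2773306311014603495,
]

for t in tokens:
    print(t, '-> shard', shard_of(t, 8))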