Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)

Young Ringer

<yangly0815@gmail.com>
Jul 28, 2021, 2:35:32 AM
to ScyllaDB users
My cluster has 15 nodes across 3 DCs, five nodes per DC, and all nodes are UN, but I get the error “Not enough replicas available for query at consistency QUORUM (5 required but only 4 alive)”. My keyspace RF is 3 for every DC, 9 in total.
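
For context: with RF 3 in each of the 3 DCs there are 9 replicas in total, and a cluster-wide QUORUM needs floor(9/2) + 1 = 5 of them, which matches the "5 required" in the error; "only 4 alive" means fewer than 5 of those replicas were reachable when the request ran, even though nodetool shows every node UN. A minimal sketch of that arithmetic (plain Java, not from any Scylla tool):

```java
// Quorum math for a NetworkTopologyStrategy keyspace with RF 3 per DC.
public class QuorumMath {
    public static void main(String[] args) {
        int dcs = 3;
        int rfPerDc = 3;
        int totalReplicas = dcs * rfPerDc;   // 9 replicas cluster-wide
        int quorum = totalReplicas / 2 + 1;  // 5 -> the "5 required" in the error
        int localQuorum = rfPerDc / 2 + 1;   // 2 -> what LOCAL_QUORUM would need per DC
        System.out.printf("QUORUM: %d of %d replicas; LOCAL_QUORUM: %d of %d%n",
                quorum, totalReplicas, localQuorum, rfPerDc);
    }
}
```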

```
cqlsh> desc alternator_testcdc2;

CREATE KEYSPACE alternator_testcdc2 WITH replication = {'class': 'NetworkTopologyStrategy', 'DC01': '3', 'DC02': '3', 'DC03': '3'}  AND durable_writes = true;

CREATE TABLE alternator_testcdc2.testcdc2 (
    cdcpk text,
    cdcsk text,
    ":attrs" map<text, blob>,
    PRIMARY KEY (cdcpk, cdcsk)
) WITH CLUSTERING ORDER BY (cdcsk ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'ALL'}
    AND comment = ''
    AND compaction = {'class': 'SizeTieredCompactionStrategy'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

scylla_tags = {}

cdc = {'postimage': 'false', 'preimage': 'false', 'ttl': '86400', 'enabled': 'true', 'delta': 'full'}

CREATE TABLE alternator_testcdc2.testcdc2_scylla_cdc_log (
    "cdc$stream_id" blob,
    "cdc$time" timeuuid,
    "cdc$batch_seq_no" int,
    ":attrs" frozen<map<text, blob>>,
    "cdc$deleted_:attrs" boolean,
    "cdc$deleted_elements_:attrs" frozen<set<text>>,
    "cdc$end_of_batch" boolean,
    "cdc$operation" tinyint,
    "cdc$ttl" bigint,
    cdcpk text,
    cdcsk text,
    PRIMARY KEY ("cdc$stream_id", "cdc$time", "cdc$batch_seq_no")
) WITH CLUSTERING ORDER BY ("cdc$time" ASC, "cdc$batch_seq_no" ASC)
    AND bloom_filter_fp_chance = 0.01
    AND caching = {'enabled': 'false', 'keys': 'NONE', 'rows_per_partition': 'NONE'}
    AND comment = 'CDC log for alternator_testcdc2.testcdc2'
    AND compaction = {'class': 'TimeWindowCompactionStrategy', 'compaction_window_size': '60', 'compaction_window_unit': 'MINUTES', 'expired_sstable_check_frequency_seconds': '1800'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.0
    AND default_time_to_live = 0
    AND gc_grace_seconds = 0
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

cqlsh>

```
And my cluster is as follows:
```
[root@node-01 ~]# nodetool status
Using /etc/scylla/scylla.yaml as the config file
Datacenter: DC01
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  192.168.75.7   1.44 TB    256          ?       f1d744c8-5342-4904-98e1-71112ca776d2  DC01_Rack01
UN  192.168.75.11  1.5 TB     256          ?       ecc4ec47-f2a4-4f29-a50e-8f9f353d2a04  DC01_Rack01
UN  192.168.75.10  1.42 TB    256          ?       87e2186d-f75e-4e36-a13e-208c5719a57d  DC01_Rack01
UN  192.168.75.9   1.5 TB     256          ?       10073da2-b7d4-4362-8b55-0f319a3028fa  DC01_Rack01
UN  192.168.75.8   1.57 TB    256          ?       6c1742c8-3353-4256-9a64-0a6e126e9377  DC01_Rack01
Datacenter: DC02
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  192.168.76.7   1.44 TB    256          ?       7caa8e1d-670d-4c6e-bd65-73e9e1038cde  DC02_Rack01
UN  192.168.76.11  1.41 TB    256          ?       244b5c43-595a-4981-aaf1-ee7361f795b4  DC02_Rack01
UN  192.168.76.10  1.46 TB    256          ?       2e3eeab2-b5fa-4b55-a8b2-1cc188c9ddb1  DC02_Rack01
UN  192.168.76.9   1.53 TB    256          ?       d5c6329a-52ea-4764-b57d-235c59cff8cf  DC02_Rack01
UN  192.168.76.8   1.57 TB    256          ?       bc0879e4-df41-43b8-871a-5f9affa5cd51  DC02_Rack01
Datacenter: DC03
==========================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens       Owns    Host ID                               Rack
UN  192.168.77.71  1.55 TB    256          ?       3ef50cf5-926d-4788-bf04-d7950e037514  DC03_Rack01
UN  192.168.77.70  1.57 TB    256          ?       aa9da3fc-bfb6-48c2-9e45-667177e1b61c  DC03_Rack01
UN  192.168.77.73  1.5 TB     256          ?       34f02fef-27ce-4565-bef2-0f029f29b8de  DC03_Rack01
UN  192.168.77.72  1.54 TB    256          ?       ef2786a4-0e03-439a-8383-f919bd4b9cc0  DC03_Rack01
UN  192.168.77.74  1.55 TB    256          ?       876a5b20-a061-4d24-bae0-c127fecae025  DC03_Rack01

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless


```

Benny Halevy

<bhalevy@scylladb.com>
Aug 1, 2021, 8:37:36 AM
to Young Ringer, scylladb-users@googlegroups.com
Hi,

What version are you using?
Any other interesting errors / warnings in the logs?
Are other nodes overloaded for some reason, to the point where they'd cause timeouts?


Young Ringer

<yangly0815@gmail.com>
Aug 2, 2021, 5:55:20 AM
to ScyllaDB users
The Scylla version I used was 4.4.1, and I hit the error while using the scylla-cdc tools.
My source cluster has 15 nodes and my destination cluster has 3 nodes.
I get the problem when I run scylla-cdc-replicator from the scylla-cdc-java tools, but the source cluster nodes did not log any errors.
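
The LOCAL_ONE failure in the subject line ("1 required but only 0 alive") would then come from the replicator's writes to the 3-node destination cluster: at LOCAL_ONE a single reachable replica suffices, so "0 alive" points at the destination (or the DC the driver considers local) being unreachable or misnamed, rather than at quorum math. If it is a DC-routing or consistency issue, both can be set explicitly on the Java driver session; a hedged sketch using the Java driver 4.x programmatic config (the contact point and DC name below are placeholders, not values from this thread, and scylla-cdc-replicator may expose its own options for this):

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import java.net.InetSocketAddress;

public class DestinationSession {
    public static void main(String[] args) {
        // Pin the request consistency for every statement run on this session.
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withString(DefaultDriverOption.REQUEST_CONSISTENCY, "LOCAL_QUORUM")
                .build();

        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("192.0.2.1", 9042)) // placeholder contact point
                .withLocalDatacenter("DC01") // must match the destination cluster's DC name exactly
                .withConfigLoader(loader)
                .build()) {
            System.out.println("Connected as " + session.getName());
        }
    }
}
```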
