I have an 8-node cluster running in AWS (4 DCs, 2 boxes each):
sh-4.4$ nodetool status
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.31.xx.xxx 7.29 MB 256 ? ab431b6f-538f-4a77-b6ba-ae01820328c1 alpha
UN 172.31.xx.xxx 9.32 MB 256 ? 6bb031d3-ecde-4151-a1ae-f33d0189a091 beta
Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.31.xx.x 7.92 MB 256 ? 817e0515-88f0-4bea-99d2-3da80115f41a beta
UN 172.31.xx.xx 8.9 MB 256 ? 22462f0c-c19c-4dc0-b6b0-faa5f7413376 alpha
Datacenter: DC3
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.31.xx.xx 8.3 MB 256 ? 3f2d70b3-6159-4b5d-9e67-5d805e6da058 alpha
UN 172.31.xx.xxx 9.43 MB 256 ? 917415d9-5568-45ea-9c50-aa4a9e573604 beta
Datacenter: DC4
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 172.31.xx.xxx 7.88 MB 256 ? 1029bd26-0e1f-434a-8edb-428ed0780bdb alpha
UN 172.31.xx.xx 8.18 MB 256 ? 57a8c866-0d00-4ded-aa5c-5c4ada9e4bcd beta
On a separate box, I have set up Scylla Manager. I want to use my cluster to store any manager-related data (rather than setting up another standalone instance just for the manager), so I created a new keyspace:
CREATE KEYSPACE IF NOT EXISTS n3_scylla_manager WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
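(For comparison, I assume a multi-DC equivalent using NetworkTopologyStrategy would look something like the following, with the DC names taken from my nodetool output above and per-DC replication factors of 1 — I have not verified that this is what the manager expects:)

```sql
-- Hypothetical multi-DC variant of the same keyspace; DC names and
-- per-DC replication factors here are assumptions based on my topology.
CREATE KEYSPACE IF NOT EXISTS n3_scylla_manager
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'DC1': 1, 'DC2': 1, 'DC3': 1, 'DC4': 1};
```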
The manager has the following configuration (right now pointing to the two boxes in DC1):
# Scylla Manager database, used to store management data.
database:
  hosts:
    - 172.31.xx.xxx
    - 172.31.xx.xxx
  # Local datacenter name, specify if using a remote, multi-dc cluster.
  # local_dc: DC1
  #
  # Keyspace for management data.
  keyspace: n3_scylla_manager
  # replication_factor: 3
If I leave the local_dc line commented out, Scylla Manager starts up fine, but I am pretty sure that is the wrong configuration. If I uncomment that line, it blows up on start. I have read that for a multi-DC setup, the value should be the name of the DC closest to the manager (in my case it does not matter much, but let's say it is DC1). On start, I get the following error:
{"L":"INFO","T":"2022-10-25T20:26:33.192Z","M":"Migrating schema","n3_scylla_manager","_trace_id":"vYvYWhS4TIS_4yN_nLdTnA"}
{"L":"ERROR","T":"2022-10-25T20:26:33.766Z","M":"Bye","error":"db init: list migrations: Cannot achieve consistency level for cl LOCAL_QUORUM. Requires 1, alive 0
STARTUP ERROR: db init: list migrations: Cannot achieve consistency level for cl LOCAL_QUORUM. Requires 1, alive 0","_trace_id":"vYvYWhS4TIS_4yN_nLdTnA","errorStack":"......
systemd[1]: scylla-manager.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: scylla-manager.service: Failed with result 'exit-code'.
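For reference, the keyspace's replication settings as the cluster sees them can be inspected from cqlsh via the standard system_schema.keyspaces table (connecting to any node):

```sql
-- Check how the manager keyspace is actually replicated.
SELECT keyspace_name, replication
  FROM system_schema.keyspaces
 WHERE keyspace_name = 'n3_scylla_manager';
```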
I have tried to find a solution, but no success yet. What am I doing wrong?