[Scylla 1.6.3] Cannot set up Scylla cluster using more than one seed node

Ming Yan

<yming0221@gmail.com>
May 5, 2017, 1:31:02 AM
to ScyllaDB users
Hi all,

I am setting up a Scylla cluster of three nodes. It is simple when I use the first node as the only seed. But since more than one seed node is recommended, I tried using the first two nodes as seeds, and now I cannot start the cluster properly.

I have three nodes:
 
10.123.101.38 (seed)
10.123.101.39 (seed)
10.123.102.35

When I start the first node (10.123.101.38), the output is normal:

INFO  2017-05-05 13:19:40,349 [shard 0] schema_tables - Creating system_traces.node_slow_log id=bfcc4e62-5b63-3aa1-a1c3-6f5e47f3325c version=52fb6dda-802f-326f-91e0-bf142d54a831
INFO  2017-05-05 13:19:40,350 [shard 0] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,350 [shard 22] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,350 [shard 11] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,350 [shard 8] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 5] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 3] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 7] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 1] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 6] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 2] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 10] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 15] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 9] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 4] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 23] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 16] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 21] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 19] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 18] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 17] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 20] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 14] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 13] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,351 [shard 12] database - Setting compaction strategy of system_traces.node_slow_log to SizeTieredCompactionStrategy
INFO  2017-05-05 13:19:40,626 [shard 0] database - Schema version changed to 82127e6a-967e-3f04-8785-edc8ff7e793e
INFO  2017-05-05 13:19:40,626 [shard 0] storage_service - Starting listening for CQL clients on 10.123.101.38:9042...
INFO  2017-05-05 13:19:40,626 [shard 0] storage_service - Thrift server listening on 10.123.101.38:9160 ...


When I start the second node (10.123.101.39), its own log is normal, like the output above. But the first node (10.123.101.38) outputs error messages:
WARN  2017-05-05 13:20:17,710 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
ERROR 2017-05-05 13:20:17,743 [shard 0] migration_task - Can't send migration request: node 10.123.101.39 is down.
ERROR 2017-05-05 13:20:17,745 [shard 0] migration_task - Can't send migration request: node 10.123.101.39 is down.
WARN  2017-05-05 13:20:18,715 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
WARN  2017-05-05 13:20:19,722 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
WARN  2017-05-05 13:20:20,730 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
WARN  2017-05-05 13:20:21,731 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
WARN  2017-05-05 13:20:22,737 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
ERROR 2017-05-05 13:20:22,760 [shard 0] migration_task - Can't send migration request: node 10.123.101.39 is down.
WARN  2017-05-05 13:20:23,748 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
ERROR 2017-05-05 13:20:23,763 [shard 0] migration_task - Can't send migration request: node 10.123.101.39 is down.
WARN  2017-05-05 13:20:24,748 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)
WARN  2017-05-05 13:20:25,760 [shard 0] gossip - Fail to send EchoMessage to 10.123.101.39:0: rpc::timeout_error (rpc call timed out)


Can anyone help?

Asias He

<asias@scylladb.com>
May 5, 2017, 2:35:23 AM
to scylladb-users@googlegroups.com
Hello, 

Can you share the scylla.yaml of both nodes?


Asias He

<asias@scylladb.com>
May 5, 2017, 3:59:35 AM
to ScyllaDB users
The config looks good. 

Can you run 

nodetool gossipinfo 

and 

nodetool status

on both nodes?

Also, are there any firewall rules that block port 7100 between the two nodes?

The log says node 1 thinks node 2 is down. Is node 2 running correctly?
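
For reference, a minimal way to check whether the inter-node port is reachable between the two machines (a sketch, assuming nc (netcat) and iptables are available; substitute whatever storage_port your nodes actually use, 7000 being the commented-out default in the scylla.yaml below):

# from 10.123.101.38: can we open a TCP connection to node 2 on the inter-node port?
nc -zv 10.123.101.39 7000

# list any local firewall rules that mention that port
sudo iptables -L -n | grep 7000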


On Fri, May 5, 2017 at 3:51 PM, Ming Yan <ymin...@gmail.com> wrote:
Yeah,

First node config file:
# Scylla storage config YAML

#######################################
# This file is split to two sections:
# 1. Supported parameters
# 2. Unsupported parameters: reserved for future use or backwards
#    compatibility.
# Scylla will only read and use the first segment
#######################################

### Supported Parameters

# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'

# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
# that this node will store. You probably want all nodes to have the same number
# of tokens assuming they have equal hardware capability.
#
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
num_tokens: 256

# Directory where Scylla should store data on disk.
# If not set, the default directory is $CASSANDRA_HOME/data/data.
data_file_directories:
    - /var/lib/scylla/data

# commit log.  when running on magnetic HDD, this should be a
# separate spindle than the data directories.
# If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
commitlog_directory: /var/lib/scylla/commitlog

# commitlog_sync may be either "periodic" or "batch."
#
# When in batch mode, Scylla won't ack writes until the commit log
# has been fsynced to disk.  It will wait
# commitlog_sync_batch_window_in_ms milliseconds between fsyncs.
# This window should be kept short because the writer threads will
# be unable to do extra work while waiting.  (You may need to increase
# concurrent_writes for the same reason.)
#
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 2
#
# the other option is "periodic" where writes may be acked immediately
# and the CommitLog is simply synced every commitlog_sync_period_in_ms
# milliseconds.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# The size of the individual commitlog file segments.  A commitlog
# segment may be archived, deleted, or recycled once all the data
# in it (potentially from each columnfamily in the system) has been
# flushed to sstables.
#
# The default size is 32, which is almost always fine, but if you are
# archiving commitlog segments (see commitlog_archiving.properties),
# then you probably want a finer granularity of archiving; 8 or 16 MB
# is reasonable.
commitlog_segment_size_in_mb: 32

# seed_provider class_name is saved for future use.
# seed addresses are mandatory!
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Scylla nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.123.101.38,10.123.101.39"

# Address or interface to bind to and tell other Scylla nodes to connect to.
# You _must_ change this if you want multiple nodes to be able to communicate!
#
# Setting listen_address to 0.0.0.0 is always wrong.
listen_address: 10.123.101.38

# Address to broadcast to other Scylla nodes
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4

# port for the CQL native transport to listen for clients on
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
native_transport_port: 9042

# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Scylla does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the network connection and degrading rpc performance.
# When unset, the default is 200 Mbps or 25 MB/s.
# stream_throughput_outbound_megabits_per_sec: 200

# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000

# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000

# phi value that must be reached for a host to be marked down.
# most users should never need to adjust this.
# phi_convict_threshold: 8

# IEndpointSnitch.  The snitch has two functions:
# - it teaches Scylla enough about your network topology to route
#   requests efficiently
# - it allows Scylla to spread replicas around your cluster to avoid
#   correlated failures. It does this by grouping machines into
#   "datacenters" and "racks."  Scylla will do its best not to have
#   more than one replica on the same "rack" (which may not actually
#   be a physical location)
#
# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER,
# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS
# ARE PLACED.
#
# Out of the box, Scylla provides
#  - SimpleSnitch:
#    Treats Strategy order as proximity. This can improve cache
#    locality when disabling read repair.  Only appropriate for
#    single-datacenter deployments.
#  - GossipingPropertyFileSnitch
#    This should be your go-to snitch for production use.  The rack
#    and datacenter for the local node are defined in
#    cassandra-rackdc.properties and propagated to other nodes via
#    gossip.  If cassandra-topology.properties exists, it is used as a
#    fallback, allowing migration from the PropertyFileSnitch.
#  - PropertyFileSnitch:
#    Proximity is determined by rack and data center, which are
#    explicitly configured in cassandra-topology.properties.
#  - Ec2Snitch:
#    Appropriate for EC2 deployments in a single Region. Loads Region
#    and Availability Zone information from the EC2 API. The Region is
#    treated as the datacenter, and the Availability Zone as the rack.
#    Only private IPs are used, so this will not work across multiple
#    Regions.
#  - Ec2MultiRegionSnitch:
#    Uses public IPs as broadcast_address to allow cross-region
#    connectivity.  (Thus, you should set seed addresses to the public
#    IP as well.) You will need to open the storage_port or
#    ssl_storage_port on the public IP firewall.  (For intra-Region
#    traffic, Scylla will switch to the private IP after
#    establishing a connection.)
#  - RackInferringSnitch:
#    Proximity is determined by rack and data center, which are
#    assumed to correspond to the 3rd and 2nd octet of each node's IP
#    address, respectively.  Unless this happens to match your
#    deployment conventions, this is best used as an example of
#    writing a custom Snitch class and is provided in that spirit.
#
# You can use a custom Snitch by setting this to the full class name
# of the snitch, which will be assumed to be on your classpath.
endpoint_snitch: SimpleSnitch

# The address or interface to bind the Thrift RPC service and native transport
# server to.
#
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving rpc_address blank has the same effect as on listen_address
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
# set broadcast_rpc_address to a value other than 0.0.0.0.
#
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
#
# If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
# you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4
# address will be used. If true the first ipv6 address will be used. Defaults to false preferring
# ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
rpc_address: 10.123.101.38
# rpc_interface: eth1
# rpc_interface_prefer_ipv6: false

# port for Thrift to listen for clients on
rpc_port: 9160

# port for REST API server
api_port: 10000

# IP for the REST API server
api_address: 10.123.101.38

# Log WARN on any batch size exceeding this value. 5kb per batch by default.
# Caution should be taken on increasing the size of this threshold as it can lead to node instability.
batch_size_warn_threshold_in_kb: 5

# Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
batch_size_fail_threshold_in_kb: 50

# Authentication backend, identifying users
# Out of the box, Scylla provides org.apache.cassandra.auth.{AllowAllAuthenticator,
# PasswordAuthenticator}.
#
# - AllowAllAuthenticator performs no checks - set it to disable authentication.
# - PasswordAuthenticator relies on username/password pairs to authenticate
#   users. It keeps usernames and hashed passwords in system_auth.credentials table.
#   Please increase system_auth keyspace replication factor if you use this authenticator.
# authenticator: AllowAllAuthenticator

# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
# Out of the box, Scylla provides org.apache.cassandra.auth.{AllowAllAuthorizer,
# CassandraAuthorizer}.
#
# - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
#   increase system_auth keyspace replication factor if you use this authorizer.
# authorizer: AllowAllAuthorizer

# initial_token allows you to specify tokens manually.  While you can use it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes to legacy clusters
# that do not have vnodes enabled.
# initial_token:

# RPC address to broadcast to drivers and other Scylla nodes. This cannot
# be set to 0.0.0.0. If left blank, this will be set to the value of
# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
# be set.
# broadcast_rpc_address: 1.2.3.4

###################################################
## Not currently supported, reserved for future use
###################################################

# May either be "true" or "false" to enable globally, or contain a list
# of data centers to enable per-datacenter.
# hinted_handoff_enabled: DC1,DC2
# hinted_handoff_enabled: true

# this defines the maximum amount of time a dead host will have hints
# generated.  After it has been dead this long, new hints for it will not be
# created until it has been seen alive and gone down again.
# max_hint_window_in_ms: 10800000 # 3 hours

# Maximum throttle in KBs per second, per delivery thread.  This will be
# reduced proportionally to the number of nodes in the cluster.  (If there
# are two nodes in the cluster, each delivery thread will use the maximum
# rate; if there are three, each will throttle to half of the maximum,
# since we expect two nodes to be delivering hints simultaneously.)
# hinted_handoff_throttle_in_kb: 1024
# Number of threads with which to deliver hints;
# Consider increasing this number when you have multi-dc deployments, since
# cross-dc handoff tends to be slower
# max_hints_delivery_threads: 2

# Maximum throttle in KBs per second, total. This will be
# reduced proportionally to the number of nodes in the cluster.
# batchlog_replay_throttle_in_kb: 1024

# Validity period for permissions cache (fetching permissions can be an
# expensive operation depending on the authorizer, CassandraAuthorizer is
# one example). Defaults to 2000, set to 0 to disable.
# Will be disabled automatically for AllowAllAuthorizer.
# permissions_validity_in_ms: 2000

# Refresh interval for permissions cache (if enabled).
# After this interval, cache entries become eligible for refresh. Upon next
# access, an async reload is scheduled and the old value returned until it
# completes. If permissions_validity_in_ms is non-zero, then this must be
# also.
# Defaults to the same value as permissions_validity_in_ms.
# permissions_update_interval_in_ms: 1000

# The partitioner is responsible for distributing groups of rows (by
# partition key) across nodes in the cluster.  You should leave this
# alone for new clusters.  The partitioner can NOT be changed without
# reloading all data, so when upgrading you should set this to the
# same partitioner you were already using.
#
# Besides Murmur3Partitioner, partitioners included for backwards
# compatibility include RandomPartitioner, ByteOrderedPartitioner, and
# OrderPreservingPartitioner.
#
partitioner: org.apache.cassandra.dht.Murmur3Partitioner

# Maximum size of the key cache in memory.
#
# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
# minimum, sometimes more. The key cache is fairly tiny for the amount of
# time it saves, so it's worthwhile to use it at large numbers.
# The row cache saves even more time, but must contain the entire row,
# so it is extremely space-intensive. It's best to only use the
# row cache if you have hot rows or static rows.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
# key_cache_size_in_mb:

# Duration in seconds after which Scylla should
# save the key cache. Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 14400 or 4 hours.
# key_cache_save_period: 14400

# Number of keys from the key cache to save
# Disabled by default, meaning all keys are going to be saved
# key_cache_keys_to_save: 100

# Maximum size of the row cache in memory.
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is 0, to disable row caching.
# row_cache_size_in_mb: 0

# Duration in seconds after which Scylla should
# save the row cache. Caches are saved to saved_caches_directory as specified
# in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 0 to disable saving the row cache.
# row_cache_save_period: 0

# Number of keys from the row cache to save
# Disabled by default, meaning all keys are going to be saved
# row_cache_keys_to_save: 100

# Maximum size of the counter cache in memory.
#
# Counter cache helps to reduce counter locks' contention for hot counter cells.
# In case of RF = 1 a counter cache hit will cause Scylla to skip the read before
# write entirely. With RF > 1 a counter cache hit will still help to reduce the duration
# of the lock hold, helping with hot counter cell updates, but will not allow skipping
# the read entirely. Only the local (clock, count) tuple of a counter cell is kept
# in memory, not the whole counter, so it's relatively cheap.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache.
# NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache.
# counter_cache_size_in_mb:

# Duration in seconds after which Scylla should
# save the counter cache (keys only). Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Default is 7200 or 2 hours.
# counter_cache_save_period: 7200

# Number of keys from the counter cache to save
# Disabled by default, meaning all keys are going to be saved
# counter_cache_keys_to_save: 100

# The off-heap memory allocator.  Affects storage engine metadata as
# well as caches.  Experiments show that JEMalloc saves some memory
# compared to the native GCC allocator (i.e., JEMalloc is more
# fragmentation-resistant).
#
# Supported values are: NativeAllocator, JEMallocAllocator
#
# If you intend to use JEMallocAllocator you have to install JEMalloc as library and
# modify cassandra-env.sh as directed in the file.
#
# Defaults to NativeAllocator
# memory_allocator: NativeAllocator

# saved caches
# If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
# saved_caches_directory: /var/lib/scylla/saved_caches



# For workloads with more data than can fit in memory, Scylla's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them. Same applies to
# "concurrent_counter_writes", since counter writes read the current
# values before incrementing and writing them back.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
# concurrent_reads: 32
# concurrent_writes: 32
# concurrent_counter_writes: 32

# Total memory to use for sstable-reading buffers.  Defaults to
# the smaller of 1/4 of heap or 512MB.
# file_cache_size_in_mb: 512

# Total space to use for commitlogs.
#
# If space gets above this value (it will round up to the next nearest
# segment multiple), Scylla will flush every dirty CF in the oldest
# segment and remove it.  So a small total commitlog space will tend
# to cause more flush activity on less-active columnfamilies.
#
# A value of -1 (default) will automatically equate it to the total amount of memory
# available for Scylla.
commitlog_total_space_in_mb: -1

# A fixed memory pool size in MB for SSTable index summaries. If left
# empty, this will default to 5% of the heap size. If the memory usage of
# all index summaries exceeds this limit, SSTables with low read rates will
# shrink their index summaries in order to meet this limit.  However, this
# is a best-effort process. In extreme conditions Scylla may need to use
# more than this amount of memory.
# index_summary_capacity_in_mb:

# How frequently index summaries should be resampled.  This is done
# periodically to redistribute memory from the fixed-size pool to sstables
# proportional to their recent read rates.  Setting to -1 will disable this
# process, leaving existing index summaries at their current sampling level.
# index_summary_resize_interval_in_minutes: 60

# Whether to, when doing sequential writing, fsync() at intervals in
# order to force the operating system to flush the dirty
# buffers. Enable this to avoid sudden dirty buffer flushing from
# impacting read latencies. Almost always a good idea on SSDs; not
# necessarily on platters.
# trickle_fsync: false
# trickle_fsync_interval_in_kb: 10240

# TCP port, for commands and data
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
# storage_port: 7000

# SSL port, for encrypted communication.  Unused unless enabled in
# encryption_options
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
# ssl_storage_port: 7001

# listen_interface: eth0
# listen_interface_prefer_ipv6: false

# Internode authentication backend, implementing IInternodeAuthenticator;
# used to allow/disallow connections from peer nodes.
# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator

# Whether to start the native transport server.
# Please note that the address on which the native transport is bound is the
# same as the rpc_address. The port however is different and specified below.
# start_native_transport: true

# The maximum threads for handling requests when the native transport is used.
# This is similar to rpc_max_threads though the default differs slightly (and
# there is no native_transport_min_threads, idle threads will always be stopped
# after 30 seconds).
# native_transport_max_threads: 128
#
# The maximum size of allowed frame. Frame (requests) larger than this will
# be rejected as invalid. The default is 256MB.
# native_transport_max_frame_size_in_mb: 256

# The maximum number of concurrent client connections.
# The default is -1, which means unlimited.
# native_transport_max_concurrent_connections: -1

# The maximum number of concurrent client connections per source ip.
# The default is -1, which means unlimited.
# native_transport_max_concurrent_connections_per_ip: -1

# Whether to start the thrift rpc server.
# start_rpc: true

# enable or disable keepalive on rpc/native connections
# rpc_keepalive: true

# Scylla provides two out-of-the-box options for the RPC Server:
#
# sync  -> One thread per thrift connection. For a very large number of clients, memory
#          will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size
#          per thread, and that will correspond to your use of virtual memory (but physical memory
#          may be limited depending on use of stack space).
#
# hsha  -> Stands for "half synchronous, half asynchronous." All thrift clients are handled
#          asynchronously using a small number of threads that does not vary with the amount
#          of thrift clients (and thus scales well to many clients). The rpc requests are still
#          synchronous (one thread per active request). If hsha is selected then it is essential
#          that rpc_max_threads is changed from the default value of unlimited.
#
# The default is sync because on Windows hsha is about 30% slower.  On Linux,
# sync/hsha performance is about the same, with hsha of course using less memory.
#
# Alternatively, you can provide your own RPC server by providing the fully-qualified class name
# of an o.a.c.t.TServerFactory that can create an instance of it.
# rpc_server_type: sync

# Uncomment rpc_min|max_thread to set request pool size limits.
#
# Regardless of your choice of RPC server (see above), the number of maximum requests in the
# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync
# RPC server, it also dictates the number of clients that can be connected at all).
#
# The default is unlimited and thus provides no protection against clients overwhelming the server. You are
# encouraged to set a maximum that makes sense for you in production, but do keep in mind that
# rpc_max_threads represents the maximum number of client requests this server may execute concurrently.
#
# rpc_min_threads: 16
# rpc_max_threads: 2048

# uncomment to set socket buffer sizes on rpc connections
# rpc_send_buff_size_in_bytes:
# rpc_recv_buff_size_in_bytes:

# Uncomment to set socket buffer size for internode communication
# Note that when setting this, the buffer size is limited by net.core.wmem_max
# and when not setting it it is defined by net.ipv4.tcp_wmem
# See:
# /proc/sys/net/core/wmem_max
# /proc/sys/net/core/rmem_max
# /proc/sys/net/ipv4/tcp_wmem
# /proc/sys/net/ipv4/tcp_rmem
# and: man tcp
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes:

# Frame size for thrift (maximum message length).
# thrift_framed_transport_size_in_mb: 15

# Set to true to have Scylla create a hard link to each sstable
# flushed or streamed locally in a backups/ subdirectory of the
# keyspace data.  Removing these links is the operator's
# responsibility.
# incremental_backups: false

# Whether or not to take a snapshot before each compaction.  Be
# careful using this option, since Scylla won't clean up the
# snapshots for you.  Mostly useful if you're paranoid when there
# is a data format change.
# snapshot_before_compaction: false

# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
# lose data on truncation or drop.
# auto_snapshot: true

# When executing a scan, within or across a partition, we need to keep the
# tombstones seen in memory so we can return them to the coordinator, which
# will use them to make sure other replicas also know about the deleted rows.
# With workloads that generate a lot of tombstones, this can cause performance
# problems and even exhaust the server heap.
# Adjust the thresholds here if you understand the dangers and want to
# scan more tombstones anyway.  These thresholds may also be adjusted at runtime
# using the StorageService mbean.
# tombstone_warn_threshold: 1000
# tombstone_failure_threshold: 100000

# Granularity of the collation index of rows within a partition.
# Increase if your rows are large, or if you have a very large
# number of rows per partition.  The competing goals are these:
#   1) a smaller granularity means more index entries are generated
#      and looking up rows within the partition by collation column
#      is faster
#   2) but, Scylla will keep the collation index in memory for hot
#      rows (as part of the key cache), so a larger granularity means
#      you can cache more hot rows
# column_index_size_in_kb: 64


# Number of simultaneous compactions to allow, NOT including
# validation "compactions" for anti-entropy repair.  Simultaneous
# compactions can help preserve read performance in a mixed read/write
# workload, by mitigating the tendency of small sstables to accumulate
# during a single long-running compaction. The default is usually
# fine and if you experience problems with compaction running too
# slowly or too fast, you should look at
# compaction_throughput_mb_per_sec first.
#
# concurrent_compactors defaults to the smaller of (number of disks,
# number of cores), with a minimum of 2 and a maximum of 8.
#
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#concurrent_compactors: 1

# Throttles compaction to the given total throughput across the entire
# system. The faster you insert data, the faster you need to compact in
# order to keep the sstable count down, but in general, setting this to
# 16 to 32 times the rate you are inserting data is more than sufficient.
# Setting this to 0 disables throttling. Note that this account for all types
# of compaction, including validation compaction.
# compaction_throughput_mb_per_sec: 16

# Log a warning when compacting partitions larger than this value
# compaction_large_partition_warning_threshold_mb: 100

# When compacting, the replacement sstable(s) can be opened before they
# are completely written, and used in place of the prior sstables for
# any range that has been written. This helps to smoothly transfer reads
# between the sstables, reducing page cache churn and keeping hot rows hot
# sstable_preemptive_open_interval_in_mb: 50

# Throttles all streaming file transfer between the datacenters,
# this setting allows users to throttle inter dc stream throughput in addition
# to throttling all network stream traffic as configured with
# stream_throughput_outbound_megabits_per_sec
# inter_dc_stream_throughput_outbound_megabits_per_sec:

# How long the coordinator should wait for seq or index scans to complete
# range_request_timeout_in_ms: 10000
# How long the coordinator should wait for writes to complete
# counter_write_request_timeout_in_ms: 5000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
# cas_contention_timeout_in_ms: 1000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
# truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
# request_timeout_in_ms: 10000

# Enable operation timeout information exchange between nodes to accurately
# measure request timeouts.  If disabled, replicas will assume that requests
# were forwarded to them instantly by the coordinator, which means that
# under overload conditions we will waste that much extra time processing
# already-timed-out requests.
#
# Warning: before enabling this property make sure ntp is installed
# and the times are synchronized between the nodes.
# cross_node_timeout: false

# Enable socket timeout for streaming operation.
# When a timeout occurs during streaming, streaming is retried from the start
# of the current file. This _can_ involve re-streaming an important amount of
# data, so you should avoid setting the value too low.
# Default value is 0, which never timeout streams.
# streaming_socket_timeout_in_ms: 0

# controls how often to perform the more expensive part of host score
# calculation
# dynamic_snitch_update_interval_in_ms: 100

# controls how often to reset all host scores, allowing a bad host to
# possibly recover
# dynamic_snitch_reset_interval_in_ms: 600000

# if set greater than zero and read_repair_chance is < 1.0, this will allow
# 'pinning' of replicas to hosts in order to increase cache capacity.
# The badness threshold will control how much worse the pinned host has to be
# before the dynamic snitch will prefer other replicas over it.  This is
# expressed as a double which represents a percentage.  Thus, a value of
# 0.2 means Scylla would continue to prefer the static snitch values
# until the pinned host was 20% worse than the fastest.
# dynamic_snitch_badness_threshold: 0.1

# request_scheduler -- Set this to a class that implements
# RequestScheduler, which will schedule incoming client requests
# according to the specific policy. This is useful for multi-tenancy
# with a single Scylla cluster.
# NOTE: This is specifically for requests from the client and does
# not affect inter node communication.
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
# client requests to a node with a separate queue for each
# request_scheduler_id. The scheduler is further customized by
# request_scheduler_options as described below.
# request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Scheduler Options vary based on the type of scheduler
# NoScheduler - Has no options
# RoundRobin
#  - throttle_limit -- The throttle_limit is the number of in-flight
#                      requests per client.  Requests beyond
#                      that limit are queued up until
#                      running requests can complete.
#                      The value of 80 here is twice the number of
#                      concurrent_reads + concurrent_writes.
#  - default_weight -- default_weight is optional and allows for
#                      overriding the default which is 1.
#  - weights -- Weights are optional and will default to 1 or the
#               overridden default_weight. The weight translates into how
#               many requests are handled during each turn of the
#               RoundRobin, based on the scheduler id.
#
# request_scheduler_options:
#    throttle_limit: 80
#    default_weight: 5
#    weights:
#      Keyspace1: 1
#      Keyspace2: 5

# request_scheduler_id -- An identifier based on which to perform
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace

# Enable or disable inter-node encryption.
# You must also generate keys and provide the appropriate key and trust store locations and passwords.
# No custom encryption options are currently enabled. The available options are:
#
# The available internode options are : all, none, dc, rack
# If set to dc scylla  will encrypt the traffic between the DCs
# If set to rack scylla  will encrypt the traffic between the racks
#
# server_encryption_options:
#    internode_encryption: none
#    certificate: conf/scylla.crt
#    keyfile: conf/scylla.key
#    truststore: <none, use system trust>
#    require_client_auth: False
#    priority_string: <none, use default>

# enable or disable client/server encryption.
# client_encryption_options:
#    enabled: false
#    certificate: conf/scylla.crt
#    keyfile: conf/scylla.key
#    truststore: <none, use system trust>
#    require_client_auth: False
#    priority_string: <none, use default>

# internode_compression controls whether traffic between nodes is
# compressed.
# can be:  all  - all traffic is compressed
#          dc   - traffic between different datacenters is compressed
#          none - nothing is compressed.
# internode_compression: none

# Enable or disable tcp_nodelay for inter-dc communication.
# Disabling it will result in larger (but fewer) network packets being sent,
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
# inter_dc_tcp_nodelay: false

# Relaxation of environment checks.
#
# Scylla places certain requirements on its environment.  If these requirements are
# not met, performance and reliability can be degraded.
#
# These requirements include:
#    - A filesystem with good support for asynchronous I/O (AIO). Currently,
#      this means XFS.
#
# false: strict environment checks are in place; do not start if they are not met.
# true: relaxed environment checks; performance and reliability may degrade.
#
developer_mode: true


# Idle-time background processing
#
# Scylla can perform certain jobs in the background while the system is otherwise idle,
# freeing processor resources when there is other work to be done.
#
# defragment_memory_on_idle: true
#
# prometheus port
# By default, Scylla opens prometheus API port on port 9180
# setting the port to 0 will disable the prometheus API.
# prometheus_port: 9180
#
# prometheus address
# By default, Scylla binds all interfaces to the prometheus API
# It is possible to restrict the listening address to a specific one
# prometheus_address: 0.0.0.0

# Distribution of data among cores (shards) within a node
#
# Scylla distributes data within a node among shards, using a round-robin
# strategy:
#  [shard0] [shard1] ... [shardN-1] [shard0] [shard1] ... [shardN-1] ...
#
# Scylla versions 1.6 and below used just one repetition of the pattern;
# this interfered with data placement among nodes (vnodes).
#
# Scylla versions 1.7 and above use 4096 repetitions of the pattern; this
# provides for better data distribution.
#
# the value below is log (base 2) of the number of repetitions.
#
# Set to 0 to avoid rewriting all data when upgrading from Scylla 1.6 and
# below.
#
# Keep at 12 for new clusters.
murmur3_partitioner_ignore_msb_bits: 12
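
As a side check: since the two files should differ only in the per-node address fields, a plain diff makes that easy to confirm (a sketch; the file names are placeholders for copies of each node's scylla.yaml, typically /etc/scylla/scylla.yaml, gathered on one machine):

# expected to show only listen_address, rpc_address and api_address differing
diff node1-scylla.yaml node2-scylla.yaml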


Second node config file:
# Scylla storage config YAML

#######################################
# This file is split to two sections:
# 1. Supported parameters
# 2. Unsupported parameters: reserved for future use or backwards
#    compatibility.
# Scylla will only read and use the first segment
#######################################

### Supported Parameters

# The name of the cluster. This is mainly used to prevent machines in
# one logical cluster from joining another.
cluster_name: 'Test Cluster'

# This defines the number of tokens randomly assigned to this node on the ring
# The more tokens, relative to other nodes, the larger the proportion of data
# that this node will store. You probably want all nodes to have the same number
# of tokens assuming they have equal hardware capability.
#
# If you already have a cluster with 1 token per node, and wish to migrate to
# multiple tokens per node, see http://wiki.apache.org/cassandra/Operations
num_tokens: 256

# Directory where Scylla should store data on disk.
# If not set, the default directory is $CASSANDRA_HOME/data/data.
data_file_directories:
    - /var/lib/scylla/data

# commit log.  when running on magnetic HDD, this should be a
# separate spindle than the data directories.
# If not set, the default directory is $CASSANDRA_HOME/data/commitlog.
commitlog_directory: /var/lib/scylla/commitlog

# commitlog_sync may be either "periodic" or "batch."
#
# When in batch mode, Scylla won't ack writes until the commit log
# has been fsynced to disk.  It will wait
# commitlog_sync_batch_window_in_ms milliseconds between fsyncs.
# This window should be kept short because the writer threads will
# be unable to do extra work while waiting.  (You may need to increase
# concurrent_writes for the same reason.)
#
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 2
#
# the other option is "periodic" where writes may be acked immediately
# and the CommitLog is simply synced every commitlog_sync_period_in_ms
# milliseconds.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# The size of the individual commitlog file segments.  A commitlog
# segment may be archived, deleted, or recycled once all the data
# in it (potentially from each columnfamily in the system) has been
# flushed to sstables.
#
# The default size is 32, which is almost always fine, but if you are
# archiving commitlog segments (see commitlog_archiving.properties),
# then you probably want a finer granularity of archiving; 8 or 16 MB
# is reasonable.
commitlog_segment_size_in_mb: 32

# seed_provider class_name is saved for future use.
# seed addresses are mandatory!
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Scylla nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.123.101.38,10.123.101.39"

# Address or interface to bind to and tell other Scylla nodes to connect to.
# You _must_ change this if you want multiple nodes to be able to communicate!
#
# Setting listen_address to 0.0.0.0 is always wrong.
listen_address: 10.123.101.39

# Address to broadcast to other Scylla nodes
# Leaving this blank will set it to the same value as listen_address
# broadcast_address: 1.2.3.4

# port for the CQL native transport to listen for clients on
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
native_transport_port: 9042

# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Scylla does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the network connection and degrading rpc performance.
# When unset, the default is 200 Mbps or 25 MB/s.
# stream_throughput_outbound_megabits_per_sec: 200

# How long the coordinator should wait for read operations to complete
read_request_timeout_in_ms: 5000

# How long the coordinator should wait for writes to complete
write_request_timeout_in_ms: 2000

# phi value that must be reached for a host to be marked down.
# most users should never need to adjust this.
# phi_convict_threshold: 8

# IEndpointSnitch.  The snitch has two functions:
# - it teaches Scylla enough about your network topology to route
#   requests efficiently
# - it allows Scylla to spread replicas around your cluster to avoid
#   correlated failures. It does this by grouping machines into
#   "datacenters" and "racks."  Scylla will do its best not to have
#   more than one replica on the same "rack" (which may not actually
#   be a physical location)
#
# IF YOU CHANGE THE SNITCH AFTER DATA IS INSERTED INTO THE CLUSTER,
# YOU MUST RUN A FULL REPAIR, SINCE THE SNITCH AFFECTS WHERE REPLICAS
# ARE PLACED.
#
# Out of the box, Scylla provides
#  - SimpleSnitch:
#    Treats Strategy order as proximity. This can improve cache
#    locality when disabling read repair.  Only appropriate for
#    single-datacenter deployments.
#  - GossipingPropertyFileSnitch
#    This should be your go-to snitch for production use.  The rack
#    and datacenter for the local node are defined in
#    cassandra-rackdc.properties and propagated to other nodes via
#    gossip.  If cassandra-topology.properties exists, it is used as a
#    fallback, allowing migration from the PropertyFileSnitch.
#  - PropertyFileSnitch:
#    Proximity is determined by rack and data center, which are
#    explicitly configured in cassandra-topology.properties.
#  - Ec2Snitch:
#    Appropriate for EC2 deployments in a single Region. Loads Region
#    and Availability Zone information from the EC2 API. The Region is
#    treated as the datacenter, and the Availability Zone as the rack.
#    Only private IPs are used, so this will not work across multiple
#    Regions.
#  - Ec2MultiRegionSnitch:
#    Uses public IPs as broadcast_address to allow cross-region
#    connectivity.  (Thus, you should set seed addresses to the public
#    IP as well.) You will need to open the storage_port or
#    ssl_storage_port on the public IP firewall.  (For intra-Region
#    traffic, Scylla will switch to the private IP after
#    establishing a connection.)
#  - RackInferringSnitch:
#    Proximity is determined by rack and data center, which are
#    assumed to correspond to the 3rd and 2nd octet of each node's IP
#    address, respectively.  Unless this happens to match your
#    deployment conventions, this is best used as an example of
#    writing a custom Snitch class and is provided in that spirit.
#
# You can use a custom Snitch by setting this to the full class name
# of the snitch, which will be assumed to be on your classpath.
endpoint_snitch: SimpleSnitch

# The address or interface to bind the Thrift RPC service and native transport
# server to.
#
# Set rpc_address OR rpc_interface, not both. Interfaces must correspond
# to a single address, IP aliasing is not supported.
#
# Leaving rpc_address blank has the same effect as on listen_address
# (i.e. it will be based on the configured hostname of the node).
#
# Note that unlike listen_address, you can specify 0.0.0.0, but you must also
# set broadcast_rpc_address to a value other than 0.0.0.0.
#
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
#
# If you choose to specify the interface by name and the interface has an ipv4 and an ipv6 address
# you can specify which should be chosen using rpc_interface_prefer_ipv6. If false the first ipv4
# address will be used. If true the first ipv6 address will be used. Defaults to false preferring
# ipv4. If there is only one address it will be selected regardless of ipv4/ipv6.
rpc_address: 10.123.101.39
# rpc_interface: eth1
# rpc_interface_prefer_ipv6: false

# port for Thrift to listen for clients on
rpc_port: 9160

# port for REST API server
api_port: 10000

# IP for the REST API server
api_address: 10.123.101.39

# Log WARN on any batch size exceeding this value. 5kb per batch by default.
# Caution should be taken on increasing the size of this threshold as it can lead to node instability.
batch_size_warn_threshold_in_kb: 5

# Fail any multiple-partition batch exceeding this value. 50kb (10x warn threshold) by default.
batch_size_fail_threshold_in_kb: 50

# Authentication backend, identifying users
# Out of the box, Scylla provides org.apache.cassandra.auth.{AllowAllAuthenticator,
# PasswordAuthenticator}.
#
# - AllowAllAuthenticator performs no checks - set it to disable authentication.
# - PasswordAuthenticator relies on username/password pairs to authenticate
#   users. It keeps usernames and hashed passwords in system_auth.credentials table.
#   Please increase system_auth keyspace replication factor if you use this authenticator.
# authenticator: AllowAllAuthenticator

# Authorization backend, implementing IAuthorizer; used to limit access/provide permissions
# Out of the box, Scylla provides org.apache.cassandra.auth.{AllowAllAuthorizer,
# CassandraAuthorizer}.
#
# - AllowAllAuthorizer allows any action to any user - set it to disable authorization.
# - CassandraAuthorizer stores permissions in system_auth.permissions table. Please
#   increase system_auth keyspace replication factor if you use this authorizer.
# authorizer: AllowAllAuthorizer

# initial_token allows you to specify tokens manually.  While you can use it with
# vnodes (num_tokens > 1, above) -- in which case you should provide a
# comma-separated list -- it's primarily used when adding nodes to legacy clusters
# that do not have vnodes enabled.
# initial_token:

# RPC address to broadcast to drivers and other Scylla nodes. This cannot
# be set to 0.0.0.0. If left blank, this will be set to the value of
# rpc_address. If rpc_address is set to 0.0.0.0, broadcast_rpc_address must
# be set.
# broadcast_rpc_address: 1.2.3.4

###################################################
## Not currently supported, reserved for future use
###################################################

# May either be "true" or "false" to enable globally, or contain a list
# of data centers to enable per-datacenter.
# hinted_handoff_enabled: DC1,DC2
# hinted_handoff_enabled: true

# this defines the maximum amount of time a dead host will have hints
# generated.  After it has been dead this long, new hints for it will not be
# created until it has been seen alive and gone down again.
# max_hint_window_in_ms: 10800000 # 3 hours

# Maximum throttle in KBs per second, per delivery thread.  This will be
# reduced proportionally to the number of nodes in the cluster.  (If there
# are two nodes in the cluster, each delivery thread will use the maximum
# rate; if there are three, each will throttle to half of the maximum,
# since we expect two nodes to be delivering hints simultaneously.)
# hinted_handoff_throttle_in_kb: 1024
# Number of threads with which to deliver hints;
# Consider increasing this number when you have multi-dc deployments, since
# cross-dc handoff tends to be slower
# max_hints_delivery_threads: 2

# Maximum throttle in KBs per second, total. This will be
# reduced proportionally to the number of nodes in the cluster.
# batchlog_replay_throttle_in_kb: 1024

# Validity period for permissions cache (fetching permissions can be an
# expensive operation depending on the authorizer, CassandraAuthorizer is
# one example). Defaults to 2000, set to 0 to disable.
# Will be disabled automatically for AllowAllAuthorizer.
# permissions_validity_in_ms: 2000

# Refresh interval for permissions cache (if enabled).
# After this interval, cache entries become eligible for refresh. Upon next
# access, an async reload is scheduled and the old value returned until it
# completes. If permissions_validity_in_ms is non-zero, then this must be
# also.
# Defaults to the same value as permissions_validity_in_ms.
# permissions_update_interval_in_ms: 1000

# The partitioner is responsible for distributing groups of rows (by
# partition key) across nodes in the cluster.  You should leave this
# alone for new clusters.  The partitioner can NOT be changed without
# reloading all data, so when upgrading you should set this to the
# same partitioner you were already using.
#
# Besides Murmur3Partitioner, partitioners included for backwards
# compatibility include RandomPartitioner, ByteOrderedPartitioner, and
# OrderPreservingPartitioner.
#
partitioner: org.apache.cassandra.dht.Murmur3Partitioner

# Maximum size of the key cache in memory.
#
# Each key cache hit saves 1 seek and each row cache hit saves 2 seeks at the
# minimum, sometimes more. The key cache is fairly tiny for the amount of
# time it saves, so it's worthwhile to use it at large numbers.
# The row cache saves even more time, but must contain the entire row,
# so it is extremely space-intensive. It's best to only use the
# row cache if you have hot rows or static rows.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(5% of Heap (in MB), 100MB)). Set to 0 to disable key cache.
# key_cache_size_in_mb:

# Duration in seconds after which Scylla should
# save the key cache. Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 14400 or 4 hours.
# key_cache_save_period: 14400

# Number of keys from the key cache to save
# Disabled by default, meaning all keys are going to be saved
# key_cache_keys_to_save: 100

# Maximum size of the row cache in memory.
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is 0, to disable row caching.
# row_cache_size_in_mb: 0

# Duration in seconds after which Scylla should
# save the row cache. Caches are saved to saved_caches_directory as specified
# in this configuration file.
#
# Saved caches greatly improve cold-start speeds, and is relatively cheap in
# terms of I/O for the key cache. Row cache saving is much more expensive and
# has limited use.
#
# Default is 0 to disable saving the row cache.
# row_cache_save_period: 0

# Number of keys from the row cache to save
# Disabled by default, meaning all keys are going to be saved
# row_cache_keys_to_save: 100

# Maximum size of the counter cache in memory.
#
# Counter cache helps to reduce counter locks' contention for hot counter cells.
# In case of RF = 1 a counter cache hit will cause Scylla to skip the read before
# write entirely. With RF > 1 a counter cache hit will still help to reduce the duration
# of the lock hold, helping with hot counter cell updates, but will not allow skipping
# the read entirely. Only the local (clock, count) tuple of a counter cell is kept
# in memory, not the whole counter, so it's relatively cheap.
#
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is empty to make it "auto" (min(2.5% of Heap (in MB), 50MB)). Set to 0 to disable counter cache.
# NOTE: if you perform counter deletes and rely on low gcgs, you should disable the counter cache.
# counter_cache_size_in_mb:

# Duration in seconds after which Scylla should
# save the counter cache (keys only). Caches are saved to saved_caches_directory as
# specified in this configuration file.
#
# Default is 7200 or 2 hours.
# counter_cache_save_period: 7200

# Number of keys from the counter cache to save
# Disabled by default, meaning all keys are going to be saved
# counter_cache_keys_to_save: 100

# The off-heap memory allocator.  Affects storage engine metadata as
# well as caches.  Experiments show that JEMalloc saves some memory
# compared to the native GCC allocator (i.e., JEMalloc is more
# fragmentation-resistant).
#
# Supported values are: NativeAllocator, JEMallocAllocator
#
# If you intend to use JEMallocAllocator you have to install JEMalloc as library and
# modify cassandra-env.sh as directed in the file.
#
# Defaults to NativeAllocator
# memory_allocator: NativeAllocator

# saved caches
# If not set, the default directory is $CASSANDRA_HOME/data/saved_caches.
# saved_caches_directory: /var/lib/scylla/saved_caches



# For workloads with more data than can fit in memory, Scylla's
# bottleneck will be reads that need to fetch data from
# disk. "concurrent_reads" should be set to (16 * number_of_drives) in
# order to allow the operations to enqueue low enough in the stack
# that the OS and drives can reorder them. Same applies to
# "concurrent_counter_writes", since counter writes read the current
# values before incrementing and writing them back.
#
# On the other hand, since writes are almost never IO bound, the ideal
# number of "concurrent_writes" is dependent on the number of cores in
# your system; (8 * number_of_cores) is a good rule of thumb.
# concurrent_reads: 32
# concurrent_writes: 32
# concurrent_counter_writes: 32

# Total memory to use for sstable-reading buffers.  Defaults to
# the smaller of 1/4 of heap or 512MB.
# file_cache_size_in_mb: 512

# Total space to use for commitlogs.
#
# If space gets above this value (it will round up to the next nearest
# segment multiple), Scylla will flush every dirty CF in the oldest
# segment and remove it.  So a small total commitlog space will tend
# to cause more flush activity on less-active columnfamilies.
#
# A value of -1 (default) will automatically equate it to the total amount of memory
# available for Scylla.
commitlog_total_space_in_mb: -1

# A fixed memory pool size in MB for SSTable index summaries. If left
# empty, this will default to 5% of the heap size. If the memory usage of
# all index summaries exceeds this limit, SSTables with low read rates will
# shrink their index summaries in order to meet this limit.  However, this
# is a best-effort process. In extreme conditions Scylla may need to use
# more than this amount of memory.
# index_summary_capacity_in_mb:

# How frequently index summaries should be resampled.  This is done
# periodically to redistribute memory from the fixed-size pool to sstables
# proportional to their recent read rates.  Setting to -1 will disable this
# process, leaving existing index summaries at their current sampling level.
# index_summary_resize_interval_in_minutes: 60

# Whether to, when doing sequential writing, fsync() at intervals in
# order to force the operating system to flush the dirty
# buffers. Enable this to avoid sudden dirty buffer flushing from
# impacting read latencies. Almost always a good idea on SSDs; not
# necessarily on platters.
# trickle_fsync: false
# trickle_fsync_interval_in_kb: 10240

# TCP port, for commands and data
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
# storage_port: 7000

# SSL port, for encrypted communication.  Unused unless enabled in
# encryption_options
# For security reasons, you should not expose this port to the internet.  Firewall it if needed.
# ssl_storage_port: 7001

# listen_interface: eth0
# listen_interface_prefer_ipv6: false

# Internode authentication backend, implementing IInternodeAuthenticator;
# used to allow/disallow connections from peer nodes.
# internode_authenticator: org.apache.cassandra.auth.AllowAllInternodeAuthenticator

# Whether to start the native transport server.
# Please note that the address on which the native transport is bound is the
# same as the rpc_address. The port however is different and specified below.
# start_native_transport: true

# The maximum threads for handling requests when the native transport is used.
# This is similar to rpc_max_threads though the default differs slightly (and
# there is no native_transport_min_threads, idle threads will always be stopped
# after 30 seconds).
# native_transport_max_threads: 128
#
# The maximum size of allowed frame. Frame (requests) larger than this will
# be rejected as invalid. The default is 256MB.
# native_transport_max_frame_size_in_mb: 256

# The maximum number of concurrent client connections.
# The default is -1, which means unlimited.
# native_transport_max_concurrent_connections: -1

# The maximum number of concurrent client connections per source ip.
# The default is -1, which means unlimited.
# native_transport_max_concurrent_connections_per_ip: -1

# Whether to start the thrift rpc server.
# start_rpc: true

# enable or disable keepalive on rpc/native connections
# rpc_keepalive: true

# Scylla provides two out-of-the-box options for the RPC Server:
#
# sync  -> One thread per thrift connection. For a very large number of clients, memory
#          will be your limiting factor. On a 64 bit JVM, 180KB is the minimum stack size
#          per thread, and that will correspond to your use of virtual memory (but physical memory
#          may be limited depending on use of stack space).
#
# hsha  -> Stands for "half synchronous, half asynchronous." All thrift clients are handled
#          asynchronously using a small number of threads that does not vary with the amount
#          of thrift clients (and thus scales well to many clients). The rpc requests are still
#          synchronous (one thread per active request). If hsha is selected then it is essential
#          that rpc_max_threads is changed from the default value of unlimited.
#
# The default is sync because on Windows hsha is about 30% slower.  On Linux,
# sync/hsha performance is about the same, with hsha of course using less memory.
#
# Alternatively, you can provide your own RPC server by providing the fully-qualified class name
# of an o.a.c.t.TServerFactory that can create an instance of it.
# rpc_server_type: sync

# Uncomment rpc_min|max_thread to set request pool size limits.
#
# Regardless of your choice of RPC server (see above), the number of maximum requests in the
# RPC thread pool dictates how many concurrent requests are possible (but if you are using the sync
# RPC server, it also dictates the number of clients that can be connected at all).
#
# The default is unlimited and thus provides no protection against clients overwhelming the server. You are
# encouraged to set a maximum that makes sense for you in production, but do keep in mind that
# rpc_max_threads represents the maximum number of client requests this server may execute concurrently.
#
# rpc_min_threads: 16
# rpc_max_threads: 2048

# uncomment to set socket buffer sizes on rpc connections
# rpc_send_buff_size_in_bytes:
# rpc_recv_buff_size_in_bytes:

# Uncomment to set socket buffer size for internode communication
# Note that when setting this, the buffer size is limited by net.core.wmem_max
# and when it is not set, it is defined by net.ipv4.tcp_wmem
# See:
# /proc/sys/net/core/wmem_max
# /proc/sys/net/core/rmem_max
# /proc/sys/net/ipv4/tcp_wmem
# /proc/sys/net/ipv4/tcp_rmem
# and: man tcp
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes:

# Frame size for thrift (maximum message length).
# thrift_framed_transport_size_in_mb: 15

# Set to true to have Scylla create a hard link to each sstable
# flushed or streamed locally in a backups/ subdirectory of the
# keyspace data.  Removing these links is the operator's
# responsibility.
# incremental_backups: false

# Whether or not to take a snapshot before each compaction.  Be
# careful using this option, since Scylla won't clean up the
# snapshots for you.  Mostly useful if you're paranoid when there
# is a data format change.
# snapshot_before_compaction: false

# Whether or not a snapshot is taken of the data before keyspace truncation
# or dropping of column families. The STRONGLY advised default of true
# should be used to provide data safety. If you set this flag to false, you will
# lose data on truncation or drop.
# auto_snapshot: true

# When executing a scan, within or across a partition, we need to keep the
# tombstones seen in memory so we can return them to the coordinator, which
# will use them to make sure other replicas also know about the deleted rows.
# With workloads that generate a lot of tombstones, this can cause performance
# problems and even exhaust the server heap.
# Adjust the thresholds here if you understand the dangers and want to
# scan more tombstones anyway.  These thresholds may also be adjusted at runtime
# using the StorageService mbean.
# tombstone_warn_threshold: 1000
# tombstone_failure_threshold: 100000

# Granularity of the collation index of rows within a partition.
# Increase if your rows are large, or if you have a very large
# number of rows per partition.  The competing goals are these:
#   1) a smaller granularity means more index entries are generated
#      and looking up rows within the partition by collation column
#      is faster
#   2) but, Scylla will keep the collation index in memory for hot
#      rows (as part of the key cache), so a larger granularity means
#      you can cache more hot rows
# column_index_size_in_kb: 64


# Number of simultaneous compactions to allow, NOT including
# validation "compactions" for anti-entropy repair.  Simultaneous
# compactions can help preserve read performance in a mixed read/write
# workload, by mitigating the tendency of small sstables to accumulate
# during a single long-running compaction. The default is usually
# fine and if you experience problems with compaction running too
# slowly or too fast, you should look at
# compaction_throughput_mb_per_sec first.
#
# concurrent_compactors defaults to the smaller of (number of disks,
# number of cores), with a minimum of 2 and a maximum of 8.
#
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#concurrent_compactors: 1

# Throttles compaction to the given total throughput across the entire
# system. The faster you insert data, the faster you need to compact in
# order to keep the sstable count down, but in general, setting this to
# 16 to 32 times the rate you are inserting data is more than sufficient.
# Setting this to 0 disables throttling. Note that this accounts for all types
# of compaction, including validation compaction.
# compaction_throughput_mb_per_sec: 16

# Log a warning when compacting partitions larger than this value
# compaction_large_partition_warning_threshold_mb: 100

# When compacting, the replacement sstable(s) can be opened before they
# are completely written, and used in place of the prior sstables for
# any range that has been written. This helps to smoothly transfer reads
# between the sstables, reducing page cache churn and keeping hot rows hot
# sstable_preemptive_open_interval_in_mb: 50

# Throttles all streaming file transfer between the datacenters,
# this setting allows users to throttle inter dc stream throughput in addition
# to throttling all network stream traffic as configured with
# stream_throughput_outbound_megabits_per_sec
# inter_dc_stream_throughput_outbound_megabits_per_sec:

# How long the coordinator should wait for seq or index scans to complete
# range_request_timeout_in_ms: 10000
# How long the coordinator should wait for writes to complete
# counter_write_request_timeout_in_ms: 5000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
# cas_contention_timeout_in_ms: 1000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
# truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
# request_timeout_in_ms: 10000

# Enable operation timeout information exchange between nodes to accurately
# measure request timeouts.  If disabled, replicas will assume that requests
# were forwarded to them instantly by the coordinator, which means that
# under overload conditions we will waste that much extra time processing
# already-timed-out requests.
#
# Warning: before enabling this property make sure NTP is installed
# and the clocks are synchronized between the nodes.
# cross_node_timeout: false

# Enable socket timeout for streaming operation.
# When a timeout occurs during streaming, streaming is retried from the start
# of the current file. This _can_ involve re-streaming a significant amount of
# data, so you should avoid setting the value too low.
# Default value is 0, which means streams never time out.
# streaming_socket_timeout_in_ms: 0

# controls how often to perform the more expensive part of host score
# calculation
# dynamic_snitch_update_interval_in_ms: 100

# controls how often to reset all host scores, allowing a bad host to
# possibly recover
# dynamic_snitch_reset_interval_in_ms: 600000

# if set greater than zero and read_repair_chance is < 1.0, this will allow
# 'pinning' of replicas to hosts in order to increase cache capacity.
# The badness threshold will control how much worse the pinned host has to be
# before the dynamic snitch will prefer other replicas over it.  This is
# expressed as a double which represents a percentage.  Thus, a value of
# 0.2 means Scylla would continue to prefer the static snitch values
# until the pinned host was 20% worse than the fastest.
# dynamic_snitch_badness_threshold: 0.1

# request_scheduler -- Set this to a class that implements
# RequestScheduler, which will schedule incoming client requests
# according to the specific policy. This is useful for multi-tenancy
# with a single Scylla cluster.
# NOTE: This is specifically for requests from the client and does
# not affect inter node communication.
# org.apache.cassandra.scheduler.NoScheduler - No scheduling takes place
# org.apache.cassandra.scheduler.RoundRobinScheduler - Round robin of
# client requests to a node with a separate queue for each
# request_scheduler_id. The scheduler is further customized by
# request_scheduler_options as described below.
# request_scheduler: org.apache.cassandra.scheduler.NoScheduler

# Scheduler Options vary based on the type of scheduler
# NoScheduler - Has no options
# RoundRobin
#  - throttle_limit -- The throttle_limit is the number of in-flight
#                      requests per client.  Requests beyond
#                      that limit are queued up until
#                      running requests can complete.
#                      The value of 80 here is twice the number of
#                      concurrent_reads + concurrent_writes.
#  - default_weight -- default_weight is optional and allows for
#                      overriding the default which is 1.
#  - weights -- Weights are optional and will default to 1 or the
#               overridden default_weight. The weight translates into how
#               many requests are handled during each turn of the
#               RoundRobin, based on the scheduler id.
#
# request_scheduler_options:
#    throttle_limit: 80
#    default_weight: 5
#    weights:
#      Keyspace1: 1
#      Keyspace2: 5

# request_scheduler_id -- An identifier based on which to perform
# the request scheduling. Currently the only valid option is keyspace.
# request_scheduler_id: keyspace

# Enable or disable inter-node encryption.
# You must also generate keys and provide the appropriate key and trust store locations and passwords.
# No custom encryption options are currently enabled. The available options are:
#
# The available internode options are: all, none, dc, rack
# If set to dc, Scylla will encrypt the traffic between the DCs
# If set to rack, Scylla will encrypt the traffic between the racks
#
# server_encryption_options:
#    internode_encryption: none
#    certificate: conf/scylla.crt
#    keyfile: conf/scylla.key
#    truststore: <none, use system trust>
#    require_client_auth: False
#    priority_string: <none, use default>

# enable or disable client/server encryption.
# client_encryption_options:
#    enabled: false
#    certificate: conf/scylla.crt
#    keyfile: conf/scylla.key
#    truststore: <none, use system trust>
#    require_client_auth: False
#    priority_string: <none, use default>

# internode_compression controls whether traffic between nodes is
# compressed.
# can be:  all  - all traffic is compressed
#          dc   - traffic between different datacenters is compressed
#          none - nothing is compressed.
# internode_compression: none

# Enable or disable tcp_nodelay for inter-dc communication.
# Disabling it will result in larger (but fewer) network packets being sent,
# reducing overhead from the TCP protocol itself, at the cost of increasing
# latency if you block for cross-datacenter responses.
# inter_dc_tcp_nodelay: false

# Relaxation of environment checks.
#
# Scylla places certain requirements on its environment.  If these requirements are
# not met, performance and reliability can be degraded.
#
# These requirements include:
#    - A filesystem with good support for asynchronous I/O (AIO). Currently,
#      this means XFS.
#
# false: strict environment checks are in place; do not start if they are not met.
# true: relaxed environment checks; performance and reliability may degrade.
#
developer_mode: true


# Idle-time background processing
#
# Scylla can perform certain jobs in the background while the system is otherwise idle,
# freeing processor resources when there is other work to be done.
#
# defragment_memory_on_idle: true
#
# prometheus port
# By default, Scylla opens prometheus API port on port 9180
# setting the port to 0 will disable the prometheus API.
# prometheus_port: 9180
#
# prometheus address
# By default, Scylla binds all interfaces to the prometheus API
# It is possible to restrict the listening address to a specific one
# prometheus_address: 0.0.0.0

# Distribution of data among cores (shards) within a node
#
# Scylla distributes data within a node among shards, using a round-robin
# strategy:
#  [shard0] [shard1] ... [shardN-1] [shard0] [shard1] ... [shardN-1] ...
#
# Scylla versions 1.6 and below used just one repetition of the pattern;
# this interfered with data placement among nodes (vnodes).
#
# Scylla versions 1.7 and above use 4096 repetitions of the pattern; this
# provides for better data distribution.
#
# the value below is log (base 2) of the number of repetitions.
#
# Set to 0 to avoid rewriting all data when upgrading from Scylla 1.6 and
# below.
#
# Keep at 12 for new clusters.
murmur3_partitioner_ignore_msb_bits: 12
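
(Side note, not part of the pasted file: this value is the log2 of the repetition count described in the comment above, so 12 corresponds to the 4096 repetitions; a quick shell check:)

# 2^12 = 4096 repetitions of the per-shard round-robin pattern
echo $((1 << 12))   # prints 4096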

Both nodes' cassandra-rackdc.properties config is as below:
dc=gz
rack=rack1
prefer_local=true



On Friday, May 5, 2017 at 2:35:23 PM UTC+8, Asias He wrote:





Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 3:59:37 AM5/5/17
to ScyllaDB users
Yeah,

I removed the commented parts of the config file.

first node config:
cluster_name: 'Test Cluster'
num_tokens: 256
data_file_directories:
    - /var/lib/scylla/data
commitlog_directory: /var/lib/scylla/commitlog
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Scylla nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.123.101.38,10.123.101.39"
listen_address: 10.123.101.38
native_transport_port: 9042
read_request_timeout_in_ms: 5000
write_request_timeout_in_ms: 2000
endpoint_snitch: SimpleSnitch
rpc_address: 10.123.101.38
rpc_port: 9160
api_port: 10000
api_address: 10.123.101.38
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
commitlog_total_space_in_mb: -1
developer_mode: true
murmur3_partitioner_ignore_msb_bits: 12

second one:
cluster_name: 'Test Cluster'
num_tokens: 256
data_file_directories:
    - /var/lib/scylla/data
commitlog_directory: /var/lib/scylla/commitlog
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
    # Addresses of hosts that are deemed contact points.
    # Scylla nodes use this list of hosts to find each other and learn
    # the topology of the ring.  You must change this if you are running
    # multiple nodes!
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # seeds is actually a comma-delimited list of addresses.
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: "10.123.101.38,10.123.101.39"
listen_address: 10.123.101.39
native_transport_port: 9042
read_request_timeout_in_ms: 5000
write_request_timeout_in_ms: 2000
endpoint_snitch: SimpleSnitch
rpc_address: 10.123.101.39
rpc_port: 9160
api_port: 10000
api_address: 10.123.101.39
batch_size_warn_threshold_in_kb: 5
batch_size_fail_threshold_in_kb: 50
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
commitlog_total_space_in_mb: -1
developer_mode: true
murmur3_partitioner_ignore_msb_bits: 12

Both nodes' cassandra-rackdc.properties files are the same:
dc=gz
rack=rack1
prefer_local=true



On Friday, May 5, 2017 at 2:35:23 PM UTC+8, Asias He wrote:
Hello, 

Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 4:18:07 AM5/5/17
to ScyllaDB users
After starting up the first node:

nodetool status of the first node
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.123.101.38  80.02 KB   256     100.0%            63fd7bf6-b10c-4795-941f-7dbe2d98402a  rack1

After starting up the second node:
nodetool status of first node
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
?N  10.123.101.39  17 KB      256     100.0%            0351aed1-3b98-474a-b072-643ea3cfa7ef  rack1
UN  10.123.101.38  90.42 KB   256     100.0%            63fd7bf6-b10c-4795-941f-7dbe2d98402a  rack1

nodetool status of second node
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.123.101.39  17 KB      256     100.0%            0351aed1-3b98-474a-b072-643ea3cfa7ef  rack1

nodetool gossipinfo of first node
  generation:1493972090
  heartbeat:144
  HOST_ID:0351aed1-3b98-474a-b072-643ea3cfa7ef
  NET_VERSION:0
  RELEASE_VERSION:2.1.8
  X1:RANGE_TOMBSTONES,LARGE_PARTITIONS
  SCHEMA:d67b2246-ed05-3b51-9e68-65a3aa1968a7
  DC:datacenter1
  STATUS:NORMAL,917084972491257489
  RACK:rack1
  LOAD:81922
  RPC_ADDRESS:10.123.101.39
  generation:1493971879
  heartbeat:361
  HOST_ID:63fd7bf6-b10c-4795-941f-7dbe2d98402a
  NET_VERSION:0
  RELEASE_VERSION:2.1.8
  X1:RANGE_TOMBSTONES,LARGE_PARTITIONS
  SCHEMA:aa3f79c5-c28d-38c9-adbd-a519eb41b3c1
  DC:datacenter1
  STATUS:NORMAL,950037460699368043
  RACK:rack1
  LOAD:92595
  RPC_ADDRESS:10.123.101.38

nodetool gossipinfo of second node
  generation:1493972090
  heartbeat:87
  RPC_ADDRESS:10.123.101.39
  X1:RANGE_TOMBSTONES,LARGE_PARTITIONS
  LOAD:81922
  NET_VERSION:0
  RELEASE_VERSION:2.1.8
  RACK:rack1
  SCHEMA:d67b2246-ed05-3b51-9e68-65a3aa1968a7
  HOST_ID:0351aed1-3b98-474a-b072-643ea3cfa7ef
  STATUS:NORMAL,917084972491257489
  DC:datacenter1


Many thanks.


On Friday, May 5, 2017 at 2:35:23 PM UTC+8, Asias He wrote:
Hello, 

Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 5:05:10 AM5/5/17
to ScyllaDB users
To clarify: everything runs fine when I use only one node as the seed, and I have checked that there is no firewall on the host machine.
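
(For what it's worth, a quick way to double-check connectivity from one node to the other, assuming the default internode port 7000 plus the 9042/10000 ports from the config above, and assuming nc is installed, would be something like:)

# run from 10.123.101.38, checking the second node
nc -zv 10.123.101.39 7000    # internode/storage port
nc -zv 10.123.101.39 9042    # CQL native transport
nc -zv 10.123.101.39 10000   # REST API port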

On Friday, May 5, 2017 at 3:59:35 PM UTC+8, Asias He wrote:




Asias He

<asias@scylladb.com>
unread,
May 5, 2017, 5:30:14 AM5/5/17
to ScyllaDB users
It would be helpful if you could send the full logs of both node1 and node2. Using two seed nodes should work. The error message suggests node1 has a problem sending the echo message to node2.




--
Asias

Tomer Sandler

<tomer@scylladb.com>
unread,
May 5, 2017, 5:54:21 AM5/5/17
to scylladb-users@googlegroups.com
Asias, I see the API address was set to the node's IP. Why? It should remain the default, which is 127.0.0.1, if I remember correctly.
Could that cause the problem? 

--
Tomer Sandler
ScyllaDB

(Sent from my android)







Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 6:02:44 AM5/5/17
to ScyllaDB users
First node output (info-level log):




Asias He

<asias@scylladb.com>
unread,
May 5, 2017, 6:26:12 AM5/5/17
to scylladb-users@googlegroups.com
API address is fine to set to the node IP. I use that all the time.


Amos Kong

<amos@scylladb.com>
unread,
May 5, 2017, 7:45:56 AM5/5/17
to scylladb-users@googlegroups.com
On Fri, May 5, 2017 at 6:26 PM, Asias He <as...@scylladb.com> wrote:
API address is fine to set to the node IP. I use that all the time.

It might cause scylla-server to fail to start:




--
                        Amos.

Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 9:44:05 AM5/5/17
to ScyllaDB users
I start the nodes using the scylla command alone, and have not set up the housekeeping service.

On Friday, May 5, 2017 at 7:45:56 PM UTC+8, Amos Kong wrote:








Ming Yan

<yming0221@gmail.com>
unread,
May 5, 2017, 12:58:31 PM5/5/17
to ScyllaDB users
I just forgot to mention that my Scylla runs in a chrooted CentOS 7 environment, because my server OS is CentOS 6. Does that matter?




Ming Yan

<yming0221@gmail.com>
unread,
May 7, 2017, 4:31:29 AM5/7/17
to ScyllaDB users
Hi Asias,
I am now sure the problem is my production runtime environment, because Scylla works well when I tested it on Ubuntu 14.04 hosted by VMware Workstation on my laptop.
I also noticed the following warning output in the log when starting Scylla in my production runtime environment.

Scylla version 1.6.3-20170405.5ab8601 starting ...

WARN  2017-05-04 20:00:17,770 [shard 14] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 19] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 15] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 18] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 12] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 17] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 13] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 21] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 16] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 20] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 23] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 5] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 22] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 1] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 6] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,771 [shard 2] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 3] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 8] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 4] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 11] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,771 [shard 7] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 0] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 10] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,770 [shard 9] seastar - Exceptional future ignored: std::system_error (error system:101, Network is unreachable)
WARN  2017-05-04 20:00:17,775 [shard 0] config - Unknown option batch_size_fail_threshold_in_kb ignored.
INFO  2017-05-04 20:00:17,790 [shard 0] init - Scylla API server listening on 10.151.212.28:10000 ...
INFO  2017-05-04 20:00:17,793 [shard 0] database - Row: max_vector_size: 32, internal_count: 5
INFO  2017-05-04 20:00:17,806 [shard 21] database - Row: max_vector_size: 32, internal_count: 5
INFO  2017-05-04 20:00:17,806 [shard 23] database - Row: max_vector_size: 32, internal_count: 5



I think Seastar is not working properly in my runtime environment.
Is the Linux kernel too old (2.6.31)? Or does the way I use chroot affect Seastar?
Any hints will be greatly appreciated. :-)

Asias He

<asias@scylladb.com>
unread,
May 7, 2017, 4:48:44 AM5/7/17
to ScyllaDB users
On Sun, May 7, 2017 at 4:31 PM, Ming Yan <ymin...@gmail.com> wrote:
Hi Asias,
I am now sure the problem is my production runtime environment, because Scylla works well when I tested it on Ubuntu 14.04 hosted by VMware Workstation on my laptop.

You ran the same configuration (2 nodes, 2 seed nodes) with Ubuntu 14.04 in VMs, and everything works, right?

I also noticed the warning output in the log when starting Scylla in my production runtime environment.

I never tried running Scylla in a chroot. The log you posted does look like there is a networking issue when running inside the chroot. As a start, I would recommend not trying this unsupported deployment scenario. Perhaps you can plug another disk into your server, or use another partition, to install a clean CentOS 7 or Ubuntu 14.04 system and install Scylla there.
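
For instance (an illustrative check only; the chroot path here is hypothetical), comparing the network view inside and outside the chroot can show whether the chroot is the culprit:

# on the host (CentOS 6)
ip addr show && ip route show

# inside the chrooted CentOS 7 environment (path is illustrative)
chroot /srv/centos7 /bin/bash -c "ip addr show && ip route show"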

 



--
Asias

Ming Yan

<yming0221@gmail.com>
unread,
May 7, 2017, 5:15:15 AM5/7/17
to ScyllaDB users
You ran the same configuration (2 nodes, 2 seed nodes) with Ubuntu 14.04 in VMs, and everything works, right?
Right.

I never tried running Scylla in a chroot. The log you posted does look like there is a networking issue when running inside the chroot. As a start, I would recommend not trying this unsupported deployment scenario. Perhaps you can plug another disk into your server, or use another partition, to install a clean CentOS 7 or Ubuntu 14.04 system and install Scylla there.

Thanks for the advice.
I will study the source code to find out the reason if I have time.




Ming Yan

<yming0221@gmail.com>
unread,
May 9, 2017, 2:39:59 AM5/9/17
to ScyllaDB users
The problem is gone after I upgraded my kernel from 2.6.32 to 3.10.
Thanks for following up.




Asias He

<asias@scylladb.com>
unread,
May 9, 2017, 3:15:53 AM5/9/17
to scylladb-users@googlegroups.com
Thanks for the update. If you are interested in why it was not working, you can use strace to see which network-related syscall failed.
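
For example (an illustrative invocation only; adjust paths and flags for your setup), tracing just the network-related syscalls while starting Scylla and writing them to a file would look roughly like:

# follow child/reactor threads (-f), trace only network syscalls, log to a file
strace -f -e trace=network -o /tmp/scylla-strace.log \
    scylla --options-file /etc/scylla/scylla.yaml

The failing call (for example a connect() or bind() returning ENETUNREACH) should then show up per shard in /tmp/scylla-strace.log.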

