[QUEUED scylladb next] docs: Update Scylla to ScyllaDB in *all* RST docs files v3


Commit Bot <bot@cloudius-systems.com>
Jul 1, 2024, 12:05:28 PM
to scylladb-dev@googlegroups.com, Tzach Livyatan
From: Tzach Livyatan <tz...@scylladb.com>
Committer: Tomasz Grabiec <tgra...@scylladb.com>
Branch: next

docs: Update Scylla to ScyllaDB in *all* RST docs files v3

Closes scylladb/scylladb#19578

---
diff --git a/docs/architecture/anti-entropy/hinted-handoff.rst b/docs/architecture/anti-entropy/hinted-handoff.rst
--- a/docs/architecture/anti-entropy/hinted-handoff.rst
+++ b/docs/architecture/anti-entropy/hinted-handoff.rst
@@ -1,14 +1,14 @@
ScyllaDB Hinted Handoff
========================

-A typical write in Scylla works according to the scenarios described in our :doc:`Fault Tolerance documentation </architecture/architecture-fault-tolerance>`.
+A typical write in ScyllaDB works according to the scenarios described in our :doc:`Fault Tolerance documentation </architecture/architecture-fault-tolerance>`.

-But what happens when a write request is sent to a Scylla node that is unresponsive due to reasons including heavy write load on a node, network issues, or even hardware failure? To ensure availability and consistency, Scylla implements :term:`hinted handoff<Hinted Handoff>`.
+But what happens when a write request is sent to a ScyllaDB node that is unresponsive due to reasons including heavy write load on a node, network issues, or even hardware failure? To ensure availability and consistency, ScyllaDB implements :term:`hinted handoff<Hinted Handoff>`.

:term:`Hint<Hint>` = target replica ID + :term:`mutation<Mutation>` data


-In other words, Scylla saves a copy of the writes intended for down nodes, and replays them to the nodes when they are up later. Thus, the write operation flow, when a node is down, looks like this:
+In other words, ScyllaDB saves a copy of the writes intended for down nodes, and replays them to the nodes when they are up later. Thus, the write operation flow, when a node is down, looks like this:

1. The co-ordinator determines all the replica nodes;

@@ -39,7 +39,7 @@ Hinted handoff is enabled and managed by these settings in :code:`scylla.yaml`:

* :code:`hinted_handoff_enabled`: enables or disables the hinted handoff feature completely or enumerates data centers where hints are allowed. By default, “true” enables hints to all nodes.
* :code:`max_hint_window_in_ms`: do not generate hints if the destination node has been down for more than this value. If a node is down longer than this period, new hints are not created. Hint generation resumes once the destination node is back up. By default, this is set to 3 hours.
-* :code:`hints_directory`: the directory where Scylla will store hints. By default this is :code:`$SCYLLA_HOME/hints`.
+* :code:`hints_directory`: the directory where ScyllaDB will store hints. By default this is :code:`$SCYLLA_HOME/hints`.

Storing of the hint can also fail. Enabling hinted handoff therefore does not eliminate the need for repair; a user must recurrently :doc:`run a full repair </operating-scylla/procedures/maintenance/repair/>` to ensure data consistency across the cluster nodes.

diff --git a/docs/architecture/anti-entropy/index.rst b/docs/architecture/anti-entropy/index.rst
--- a/docs/architecture/anti-entropy/index.rst
+++ b/docs/architecture/anti-entropy/index.rst
@@ -1,16 +1,16 @@
-Scylla Anti-Entropy
-===================
+ScyllaDB Anti-Entropy
+=====================

.. toctree::
:hidden:
:glob:

- Scylla Hinted Handoff <hinted-handoff/>
- Scylla Read Repair <read-repair/>
- Scylla Repair </operating-scylla/procedures/maintenance/repair/>
+ ScyllaDB Hinted Handoff <hinted-handoff/>
+ ScyllaDB Read Repair <read-repair/>
+ ScyllaDB Repair </operating-scylla/procedures/maintenance/repair/>


-Scylla replicates data according to :term:`eventual consistency<Eventual Consistency>`. This means that, in Scylla, when considering the :term:`CAP Theorem<CAP Theorem>`, availability and partition tolerance are considered a higher priority over consistency. Although Scylla’s tunable consistency allows users to make a tradeoff between availability and consistency, Scylla’s :term:`consistency level<Consistency Level (CL)>` is tunable per query.
+ScyllaDB replicates data according to :term:`eventual consistency<Eventual Consistency>`. This means that, in ScyllaDB, when considering the :term:`CAP Theorem<CAP Theorem>`, availability and partition tolerance are considered a higher priority than consistency. Although ScyllaDB’s tunable consistency allows users to make a tradeoff between availability and consistency, ScyllaDB’s :term:`consistency level<Consistency Level (CL)>` is tunable per query.

However, over time, there can be a number of reasons for data inconsistencies, including:

@@ -21,14 +21,14 @@ However, over time, there can be a number of reasons for data inconsistencies, i
5. a replica that cannot write due to being out of resources;
6. file corruption.

-To mitigate :term:`entropy<Entropy>`, or data inconsistency, Scylla uses a few different processes. The goal of Scylla :term:`anti-entropy<Anti-Entropy>` - based on that of Apache Cassandra - is to compare data on all replicas, synchronize data between all replicas, and, finally, ensure each replica has the most recent data.
+To mitigate :term:`entropy<Entropy>`, or data inconsistency, ScyllaDB uses a few different processes. The goal of ScyllaDB :term:`anti-entropy<Anti-Entropy>` - based on that of Apache Cassandra - is to compare data on all replicas, synchronize data between all replicas, and, finally, ensure each replica has the most recent data.

Anti-entropy measures include *write-time* changes such as :term:`hinted handoff<Hinted Handoff>`, *read-time* changes such as :term:`read repair<Read Repair>`, and finally, periodic maintenance via :term:`repair<Repair>`.

-* :doc:`Scylla Hinted Handoff <hinted-handoff/>` - High-Level view of Scylla Hinted Handoff
-* :doc:`Scylla Read Repair <read-repair/>` - High-Level view of Scylla Read Repair
-* :doc:`Scylla Repair </operating-scylla/procedures/maintenance/repair/>` - Description of Scylla Repair
+* :doc:`ScyllaDB Hinted Handoff <hinted-handoff/>` - High-Level view of ScyllaDB Hinted Handoff
+* :doc:`ScyllaDB Read Repair <read-repair/>` - High-Level view of ScyllaDB Read Repair
+* :doc:`ScyllaDB Repair </operating-scylla/procedures/maintenance/repair/>` - Description of ScyllaDB Repair

-Also learn more in the `Cluster Management, Repair and Scylla Manager lesson <https://university.scylladb.com/courses/scylla-operations/lessons/cluster-management-repair-and-scylla-manager/topic/cluster-management-repair-and-scylla-manager/>`_ on Scylla University.
+Also learn more in the `Cluster Management, Repair and ScyllaDB Manager lesson <https://university.scylladb.com/courses/scylla-operations/lessons/cluster-management-repair-and-scylla-manager/topic/cluster-management-repair-and-scylla-manager/>`_ on ScyllaDB University.

.. include:: /rst_include/apache-copyrights.rst
diff --git a/docs/architecture/anti-entropy/read-repair.rst b/docs/architecture/anti-entropy/read-repair.rst
--- a/docs/architecture/anti-entropy/read-repair.rst
+++ b/docs/architecture/anti-entropy/read-repair.rst
@@ -3,7 +3,7 @@ ScyllaDB Read Repair

Read repair serves as an anti-entropy mechanism during read path.

-On read operations, Scylla runs a process called :term:`read repair<Read Repair>`, to ensure that replicas are updated with most recently updated data. Such repairs during read path run automatically, asynchronously, and in the background.
+On read operations, ScyllaDB runs a process called :term:`read repair<Read Repair>` to ensure that replicas are updated with the most recently written data. Such repairs during the read path run automatically, asynchronously, and in the background.

Note, however, that if a digest mismatch is detected before the consistency level is reached, the repair will run in the foreground.

@@ -37,7 +37,7 @@ See the appendices below for the detailed flow.

.. image:: 4_read_repair.png

-* :doc:`Scylla Anti-Entropy </architecture/anti-entropy/index/>`
+* :doc:`ScyllaDB Anti-Entropy </architecture/anti-entropy/index/>`

Appendix
^^^^^^^^
diff --git a/docs/architecture/compaction/compaction-strategies.rst b/docs/architecture/compaction/compaction-strategies.rst
--- a/docs/architecture/compaction/compaction-strategies.rst
+++ b/docs/architecture/compaction/compaction-strategies.rst
@@ -3,7 +3,7 @@ Choose a Compaction Strategy
============================


-Scylla implements the following compaction strategies in order to reduce :term:`read amplification<Read Amplification>`, :term:`write amplification<Write Amplification>`, and :term:`space amplification<Space Amplification>`, which causes bottlenecks and poor performance. These strategies include:
+ScyllaDB implements the following compaction strategies to reduce :term:`read amplification<Read Amplification>`, :term:`write amplification<Write Amplification>`, and :term:`space amplification<Space Amplification>`, which cause bottlenecks and poor performance. These strategies include:

* `Size-tiered compaction strategy (STCS)`_ - triggered when the system has enough (four by default) similarly sized SSTables.
* `Leveled compaction strategy (LCS)`_ - the system uses small, fixed-size (by default 160 MB) SSTables distributed across different levels.
@@ -12,7 +12,7 @@ Scylla implements the following compaction strategies in order to reduce :term:`

This document covers how to choose a compaction strategy and presents the benefits and disadvantages of each one. If you want more information on compaction in general or on any of these strategies, refer to the :doc:`Compaction Overview </kb/compaction>`. If you want an explanation of the CQL commands used to create a compaction strategy, refer to :doc:`Compaction CQL Reference </cql/compaction>` .

-Learn more in the `Compaction Strategies lesson <https://university.scylladb.com/courses/scylla-operations/lessons/compaction-strategies/>`_ on Scylla University
+Learn more in the `Compaction Strategies lesson <https://university.scylladb.com/courses/scylla-operations/lessons/compaction-strategies/>`_ on ScyllaDB University

.. _STCS1:

@@ -197,6 +197,6 @@ References
----------
* :doc:`Compaction Overview </kb/compaction>` - contains in depth information on all of the strategies
* :doc:`Compaction CQL Reference </cql/compaction>` - covers the CQL parameters used for implementing compaction
-* Scylla Summit Tech Talk: `How to Ruin Performance by Choosing the Wrong Compaction Strategy <https://www.scylladb.com/tech-talk/ruin-performance-choosing-wrong-compaction-strategy-scylla-summit-2017/>`_
+* ScyllaDB Summit Tech Talk: `How to Ruin Performance by Choosing the Wrong Compaction Strategy <https://www.scylladb.com/tech-talk/ruin-performance-choosing-wrong-compaction-strategy-scylla-summit-2017/>`_
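
For reference, the strategy for a table is chosen with the ``compaction`` property at table-creation time. A minimal, hypothetical CQL sketch using LCS (the table name is made up; ``sstable_size_in_mb`` mirrors the 160 MB default mentioned above)::

    CREATE TABLE mykeyspace.events (
        id uuid PRIMARY KEY,
        payload text
    ) WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': 160
    };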


diff --git a/docs/architecture/console-CL-full-demo.rst b/docs/architecture/console-CL-full-demo.rst
--- a/docs/architecture/console-CL-full-demo.rst
+++ b/docs/architecture/console-CL-full-demo.rst
@@ -1,8 +1,8 @@
Consistency Level Console Demo
==============================
-In this demo, we'll bring up 3 nodes and demonstrate how writes and reads look, with tracing enabled in a cluster where our :term:`Replication Factor (RF)<Replication Factor (RF)>` is set to **3**. We'll change the :term:`Consistency Level (CL)<Consistency Level (CL)>` between operations to show how messages are passed between nodes, and finally take down a few nodes to show failure conditions in a Scylla cluster.
+In this demo, we'll bring up 3 nodes and demonstrate how writes and reads look, with tracing enabled in a cluster where our :term:`Replication Factor (RF)<Replication Factor (RF)>` is set to **3**. We'll change the :term:`Consistency Level (CL)<Consistency Level (CL)>` between operations to show how messages are passed between nodes, and finally take down a few nodes to show failure conditions in a ScyllaDB cluster.

-You can also learn more in the `High Availability lesson <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/>`_ on Scylla University.
+You can also learn more in the `High Availability lesson <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/>`_ on ScyllaDB University.


Note: We use asciinema_ to generate the console casts used in this demo. These asciicasts are more readable than embedded video, and allow you to copy the text or commands directly from the console to your clipboard. We suggest viewing console casts in **fullscreen** to see the properly formatted output.
diff --git a/docs/architecture/index.rst b/docs/architecture/index.rst
--- a/docs/architecture/index.rst
+++ b/docs/architecture/index.rst
@@ -18,8 +18,8 @@ ScyllaDB Architecture
* :doc:`ScyllaDB Ring Architecture </architecture/ringarchitecture/index/>` - High-Level view of ScyllaDB Ring Architecture
* :doc:`ScyllaDB Fault Tolerance </architecture/architecture-fault-tolerance>` - Deep dive into ScyllaDB Fault Tolerance
* :doc:`Consistency Level Console Demo </architecture/console-CL-full-demo>` - Console Demos of Consistency Level Settings
-* :doc:`Scylla Anti-Entropy </architecture/anti-entropy/index/>` - High-Level view of Scylla Anti-Entropy
-* :doc:`SSTable </architecture/sstable/index/>` - Scylla SSTable 2.0 and 3.0 Format Information
+* :doc:`ScyllaDB Anti-Entropy </architecture/anti-entropy/index/>` - High-Level view of ScyllaDB Anti-Entropy
+* :doc:`SSTable </architecture/sstable/index/>` - ScyllaDB SSTable 2.0 and 3.0 Format Information
* :doc:`Compaction Strategies </architecture/compaction/compaction-strategies>` - High-level analysis of different compaction strategies
* :doc:`Raft Consensus Algorithm in ScyllaDB </architecture/raft>` - Overview of how Raft is implemented in ScyllaDB.

diff --git a/docs/architecture/raft.rst b/docs/architecture/raft.rst
--- a/docs/architecture/raft.rst
+++ b/docs/architecture/raft.rst
@@ -67,7 +67,7 @@ version. Please consult the upgrade guide.
The Raft upgrade procedure requires **full cluster availability** to correctly setup the Raft algorithm; after the setup finishes, Raft can proceed with only a majority of nodes, but this initial setup is an exception.
An unlucky event, such as a hardware failure, may cause one of your nodes to fail. If this happens before the Raft upgrade procedure finishes, the procedure will get stuck and your intervention will be required.

-To verify that the procedure finishes, look at the log of every Scylla node (using ``journalctl _COMM=scylla``). Search for the following patterns:
+To verify that the procedure finishes, look at the log of every ScyllaDB node (using ``journalctl _COMM=scylla``). Search for the following patterns:

* ``Starting internal upgrade-to-raft procedure`` denotes the start of the procedure,
* ``Raft upgrade finished`` denotes the end.
@@ -252,6 +252,6 @@ Learn More About Raft
----------------------
* `The Raft Consensus Algorithm <https://raft.github.io/>`_
* `Achieving NoSQL Database Consistency with Raft in ScyllaDB <https://www.scylladb.com/tech-talk/achieving-nosql-database-consistency-with-raft-in-scylla/>`_ - A tech talk by Konstantin Osipov
-* `Making Schema Changes Safe with Raft <https://www.scylladb.com/presentations/making-schema-changes-safe-with-raft/>`_ - A Scylla Summit talk by Konstantin Osipov (register for access)
-* `The Future of Consensus in ScyllaDB 5.0 and Beyond <https://www.scylladb.com/presentations/the-future-of-consensus-in-scylladb-5-0-and-beyond/>`_ - A Scylla Summit talk by Tomasz Grabiec (register for access)
+* `Making Schema Changes Safe with Raft <https://www.scylladb.com/presentations/making-schema-changes-safe-with-raft/>`_ - A ScyllaDB Summit talk by Konstantin Osipov (register for access)
+* `The Future of Consensus in ScyllaDB 5.0 and Beyond <https://www.scylladb.com/presentations/the-future-of-consensus-in-scylladb-5-0-and-beyond/>`_ - A ScyllaDB Summit talk by Tomasz Grabiec (register for access)

diff --git a/docs/architecture/ringarchitecture/index.rst b/docs/architecture/ringarchitecture/index.rst
--- a/docs/architecture/ringarchitecture/index.rst
+++ b/docs/architecture/ringarchitecture/index.rst
@@ -1,13 +1,13 @@
ScyllaDB Ring Architecture - Overview
======================================

-Scylla is a database that scales out and up. Scylla adopted much of its distributed scale-out design from the Apache Cassandra project (which adopted distribution concepts from Amazon Dynamo and data modeling concepts from Google BigTable).
+ScyllaDB is a database that scales out and up. ScyllaDB adopted much of its distributed scale-out design from the Apache Cassandra project (which adopted distribution concepts from Amazon Dynamo and data modeling concepts from Google BigTable).

In the world of big data, a single node cannot hold the entire dataset and thus, a cluster of nodes is needed.

-A Scylla :term:`cluster<Cluster>` is a collection of :term:`nodes<Node>`, or Scylla instances, visualized as a ring. All of the nodes should be homogeneous using a shared-nothing approach. This article describes the design that determines how data is distributed among the cluster members.
+A ScyllaDB :term:`cluster<Cluster>` is a collection of :term:`nodes<Node>`, or ScyllaDB instances, visualized as a ring. All of the nodes should be homogeneous using a shared-nothing approach. This article describes the design that determines how data is distributed among the cluster members.

-A Scylla :term:`keyspace<Keyspace>` is a collection of tables with attributes that define how data is replicated on nodes. A keyspace is analogous to a database in SQL. When a new keyspace is created, the user sets a numerical attribute, the :term:`replication factor<Replication Factor (RF)>`, that defines how data is replicated on nodes. For example, an :abbr:`RF (Replication Factor)` of 2 means a given token or token range will be stored on 2 nodes (or replicated on one additional node). We will use an RF value of 2 in our examples.
+A ScyllaDB :term:`keyspace<Keyspace>` is a collection of tables with attributes that define how data is replicated on nodes. A keyspace is analogous to a database in SQL. When a new keyspace is created, the user sets a numerical attribute, the :term:`replication factor<Replication Factor (RF)>`, that defines how data is replicated on nodes. For example, an :abbr:`RF (Replication Factor)` of 2 means a given token or token range will be stored on 2 nodes (or replicated on one additional node). We will use an RF value of 2 in our examples.

A :term:`table<Table>` is a standard collection of columns and rows, as defined by a schema. Subsequently, when a table is created, using CQL (Cassandra Query Language) within a keyspace, a primary key is defined out of a subset of the table’s columns.

@@ -49,7 +49,7 @@ The hashed output of the partition key determines its placement within the clust

The figure above illustrates an example 0-1200 token range divided evenly amongst a three node cluster.

-Scylla, by default, uses the Murmur3 partitioner. With the MurmurHash3 function, the 64-bit hash values (produced for the partition key) range from |From| to |To|. This explains why there are also negative values in our ``nodetool ring`` output below.
+ScyllaDB, by default, uses the Murmur3 partitioner. With the MurmurHash3 function, the 64-bit hash values (produced for the partition key) range from |From| to |To|. This explains why there are also negative values in our ``nodetool ring`` output below.

.. |From| image:: CodeCogsEqn.gif
.. |To| image:: CodeCogsEqn-2.gif
@@ -58,9 +58,9 @@ Scylla, by default, uses the Murmur3 partitioner. With the MurmurHash3 function,

In the drawing above, each number represents a token range. With a replication factor of 2, we see that each node holds one range from the previous node, and one range from the next node.

-Note, however, that Scylla exclusively uses a Vnode-oriented architecture. A :term:`Virtual node` represents a contiguous range of tokens owned by a single Scylla node. A physical node may be assigned multiple, non-contiguous Vnodes.
+Note, however, that ScyllaDB exclusively uses a Vnode-oriented architecture. A :term:`Virtual node` represents a contiguous range of tokens owned by a single ScyllaDB node. A physical node may be assigned multiple, non-contiguous Vnodes.

-Scylla’s implementation of a Vnode oriented architecture provides several advantages. First of all, rebalancing a cluster is no longer required when adding or removing nodes. Secondly, as rebuilding can stream data from all available nodes (instead of just the nodes where data would reside on a one-token-per-node setup), Scylla can rebuild faster.
+ScyllaDB’s implementation of a Vnode-oriented architecture provides several advantages. First of all, rebalancing a cluster is no longer required when adding or removing nodes. Secondly, as rebuilding can stream data from all available nodes (instead of just the nodes where data would reside on a one-token-per-node setup), ScyllaDB can rebuild faster.

.. image:: ring-architecture-5.png

@@ -113,7 +113,7 @@ We can also get information on our cluster with
Schema versions:
082bce63-be30-3e6b-9858-4fb243ce409c: [172.17.0.2, 172.17.0.3, 172.17.0.4]

-Learn more in the `Cluster Node Ring lesson <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/architecture/topic/cluster-node-ring/>`_ on Scylla University
+Learn more in the `Cluster Node Ring lesson <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/architecture/topic/cluster-node-ring/>`_ on ScyllaDB University

.. include:: /rst_include/apache-copyrights.rst
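
To see the Murmur3 token placement described above for your own rows, the ``token()`` function can be queried directly. A small illustration, assuming a hypothetical keyspace that uses the RF of 2 from the examples::

    CREATE KEYSPACE demo
        WITH replication = {'class': 'NetworkTopologyStrategy', 'replication_factor': 2};

    CREATE TABLE demo.users (user_id int PRIMARY KEY, name text);

    -- token() returns the Murmur3 hash used to place each partition on the ring;
    -- values may be negative, as noted above.
    SELECT user_id, token(user_id) FROM demo.users;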

diff --git a/docs/architecture/sstable/index.rst b/docs/architecture/sstable/index.rst
--- a/docs/architecture/sstable/index.rst
+++ b/docs/architecture/sstable/index.rst
@@ -9,11 +9,11 @@ ScyllaDB SSTable Format

.. include:: _common/sstable_what_is.rst

-* In Scylla 6.0 and above, *me* format is enabled by default.
+* In ScyllaDB 6.0 and above, *me* format is enabled by default.

-* In Scylla Enterprise 2021.1, Scylla 4.3 and above, *md* format is enabled by default.
+* In ScyllaDB Enterprise 2021.1, ScyllaDB 4.3 and above, *md* format is enabled by default.

-* In Scylla 3.1 and above, *mc* format is enabled by default.
+* In ScyllaDB 3.1 and above, *mc* format is enabled by default.

For more information on each of the SSTable formats, see below:

diff --git a/docs/architecture/sstable/sstable2/index.rst b/docs/architecture/sstable/sstable2/index.rst
--- a/docs/architecture/sstable/sstable2/index.rst
+++ b/docs/architecture/sstable/sstable2/index.rst
@@ -12,13 +12,13 @@ ScyllaDB SSTable - 2.x

.. include:: ../_common/sstable_what_is.rst

-For more information about Scylla 2.x SSTable formats, see below:
+For more information about ScyllaDB 2.x SSTable formats, see below:


-* :doc:`SSTable Compression <sstable-compression>` - Deep dive into Scylla/Apache Cassandra SSTable Compression
-* :doc:`SSTable Data File <sstable-data-file>` - Deep dive into Scylla/Apache Cassandra SSTable format
-* :doc:`SSTable format in Scylla <sstable-format>` - Scylla SSTables are compatible to those in Apache Cassandra 2.1.8, but why there are more of them?
-* :doc:`SSTable Interpretation <sstable-interpretation>` - Deep dive into Scylla/Apache Cassandra SSTable Interpretation in Scylla
-* :doc:`SSTable Summary File <sstable-summary-file>` - Deep dive into Scylla/Apache Cassandra SSTable Summary file format
+* :doc:`SSTable Compression <sstable-compression>` - Deep dive into ScyllaDB/Apache Cassandra SSTable Compression
+* :doc:`SSTable Data File <sstable-data-file>` - Deep dive into ScyllaDB/Apache Cassandra SSTable format
+* :doc:`SSTable format in ScyllaDB <sstable-format>` - ScyllaDB SSTables are compatible with those in Apache Cassandra 2.1.8, so why are there more of them?
+* :doc:`SSTable Interpretation <sstable-interpretation>` - Deep dive into ScyllaDB/Apache Cassandra SSTable Interpretation in ScyllaDB
+* :doc:`SSTable Summary File <sstable-summary-file>` - Deep dive into ScyllaDB/Apache Cassandra SSTable Summary file format

.. include:: /rst_include/apache-copyrights.rst
diff --git a/docs/architecture/sstable/sstable2/sstable-format.rst b/docs/architecture/sstable/sstable2/sstable-format.rst
--- a/docs/architecture/sstable/sstable2/sstable-format.rst
+++ b/docs/architecture/sstable/sstable2/sstable-format.rst
@@ -2,12 +2,12 @@ SSTable format in ScyllaDB
===========================


-Scylla supports the same SSTable format as Apache Cassandra 2.1.8, which means
+ScyllaDB supports the same SSTable format as Apache Cassandra 2.1.8, which means
you can simply place SSTables from a Cassandra data directory into a
-Scylla data directory—and it will just work
+ScyllaDB data directory—and it will just work.

-Looking more carefully, you will see that Scylla maintains more,
-smaller, SSTables than Cassandra does. On Scylla, each core manages its
+Looking more carefully, you will see that ScyllaDB maintains more,
+smaller SSTables than Cassandra does. On ScyllaDB, each core manages its
own subset of SSTables. This internal sharding allows each core (shard)
to work more efficiently, avoiding the complexity and delays of multiple
cores competing for the same data
diff --git a/docs/architecture/sstable/sstable2/sstable-interpretation.rst b/docs/architecture/sstable/sstable2/sstable-interpretation.rst
--- a/docs/architecture/sstable/sstable2/sstable-interpretation.rst
+++ b/docs/architecture/sstable/sstable2/sstable-interpretation.rst
@@ -10,8 +10,8 @@ SSTable Interpretation
**Audience: Devops professionals, architects**

The SSTables Data File contains rows of data. This document discusses
-how to interpret the various fields described in :doc:`SSTables Data File </architecture/sstable/sstable2/sstable-data-file/>` in the context of Scylla, and how to
-convert this data into Scylla's native data structure:
+how to interpret the various fields described in :doc:`SSTables Data File </architecture/sstable/sstable2/sstable-data-file/>` in the context of ScyllaDB, and how to
+convert this data into ScyllaDB's native data structure:
**mutation\_partition**.

SSTable Rows
@@ -28,7 +28,7 @@ As we'll explain below when discussing clustering columns, the best term
for what we read from one row in the SSTable isn't a "row", but rather a
**partition**.

-For these reasons, Scylla's internal representation for a row we read
+For these reasons, ScyllaDB's internal representation for a row we read
from the SSTable is called ``class mutation_partition``.

Column Names
@@ -38,8 +38,8 @@ As explained in :doc:`SSTables Data File </architecture/sstable/sstable2/sstable
sstable row (a mutation partition) is a list of *cells* (column values).
Each cell is preceded by the full column name. This was considered a
good idea when Apache Cassandra was designed to support rows with many and
-arbitrary columns, but Scylla is more oriented toward the CQL use case
-with a known schema. So Scylla's rows do not store the full column name,
+arbitrary columns, but ScyllaDB is more oriented toward the CQL use case
+with a known schema. So ScyllaDB's rows do not store the full column name,
but rather store a numeric ID which points to the known list of columns
from the schema. So as we read column names from the SSTable in form of IDs,
we need to translate the IDs into names by looking them up in the schema.
@@ -103,7 +103,7 @@ is not actually an empty string, but a composite with one empty
component, **()** (serialized on disk as ``'\000 \000 \000'``).

I hope we can simply ignore these CQL Row Marker cells, and not
-duplicate them in Scylla's internal format. We just need a different way
+duplicate them in ScyllaDB's internal format. We just need a different way
to allow empty rows (a row with only a key, but no data columns) to
exist, to circumvent the problems mentioned in CASSANDRA-4361 and the
comment in UpdateStatement.Java.
@@ -152,7 +152,7 @@ column names, but rather the *value* of the clustering column nick, and
only the last component, "age", is an actual name of a field from the
CQL schema.

-In Scylla nomenclature, this single **partition** (with key
+In ScyllaDB nomenclature, this single **partition** (with key
name="nadav") has multiple **rows**, each with a different value of the
clustering key (nick). Each of these rows has, as usual, columns whose
names are the fields from the CQL schema (and as explained above, are
@@ -220,7 +220,7 @@ So sstables have static columns specially marked by an empty first
component of the composite cell name. We need to verify that each such
cell actually corresponds to a known static column from the table's
schema, and collect all these static columns into one row
-(``_static_row``) stored in Scylla's ``mutation_partition``.
+(``_static_row``) stored in ScyllaDB's ``mutation_partition``.

TODO: CompositeType.java explains that static columns do not really have
an empty first component (size 0), but rather the first component has
@@ -436,7 +436,7 @@ operations, and negative for prepend operations. This ensures that, for
example, a later append always sorts after an earlier append - without
the append having to know which items already exist in the list.

-Scylla's internal storage of a collection in a mutation is the
+ScyllaDB's internal storage of a collection in a mutation is the
``class collection_mutation``, and we need to convert the above
described representation into that class. TODO: I still can't figure out
exactly what is the internal structure of our collection\_mutation
diff --git a/docs/architecture/sstable/sstable3/sstable-format.rst b/docs/architecture/sstable/sstable3/sstable-format.rst
--- a/docs/architecture/sstable/sstable3/sstable-format.rst
+++ b/docs/architecture/sstable/sstable3/sstable-format.rst
@@ -1,12 +1,12 @@
SSTable 3.0 Format in ScyllaDB
===============================

-Scylla supports the same SSTable format as Apache Cassandra 3.0.
-You can simply place SSTables from a Cassandra data directory into a Scylla uploads directory
+ScyllaDB supports the same SSTable format as Apache Cassandra 3.0.
+You can simply place SSTables from a Cassandra data directory into a ScyllaDB uploads directory
and use the ``nodetool refresh`` command to ingest their data into the table.

-Looking more carefully, you will see that Scylla maintains more,
-smaller, SSTables than Cassandra does. On Scylla, each core manages its
+Looking more carefully, you will see that ScyllaDB maintains more,
+smaller SSTables than Cassandra does. On ScyllaDB, each core manages its
own subset of SSTables. This internal sharding allows each core (shard)
to work more efficiently, avoiding the complexity and delays of multiple
cores competing for the same data
diff --git a/docs/architecture/sstable/sstable3/sstables-3-data-file-format.rst b/docs/architecture/sstable/sstable3/sstables-3-data-file-format.rst
--- a/docs/architecture/sstable/sstable3/sstables-3-data-file-format.rst
+++ b/docs/architecture/sstable/sstable3/sstables-3-data-file-format.rst
@@ -233,9 +233,9 @@ If `EXTENSION_FLAG` is set, the following byte `extended_flags` is a bitwise-or
// Whether the encoded row is a static. If there is no extended flag, the row is assumed not static.
IS_STATIC = 0x01,
// Whether the row deletion is shadowable. If there is no extended flag (or no row deletion), the deletion is assumed not shadowable. This flag is deprecated - see CASSANDRA-11500.
- // This flag is not supported by Scylla and SSTables that have this flag set fail to be loaded.
+ // This flag is not supported by ScyllaDB and SSTables that have this flag set fail to be loaded.
HAS_SHADOWABLE_DELETION_CASSANDRA = 0x02,
- // A Scylla-specific flag (not supported by Cassandra) that indicates the presence of a shadowable tombstone.
+ // A ScyllaDB-specific flag (not supported by Cassandra) that indicates the presence of a shadowable tombstone.
// See below for details
HAS_SHADOWABLE_DELETION_SCYLLA = 0x80,
};
@@ -317,9 +317,9 @@ Shadowable Tombstones
Cassandra only maintains up to one tombstone for a row. In case if it is shadowable, Cassandra sets the corresponding HAS_SHADOWABLE_DELETION_CASSANDRA flag.

It turns out that this approach is imperfect and there are known issues with the current shadowable deletions support in Cassandra (see https://issues.apache.org/jira/browse/CASSANDRA-13826 for details).
-To address those, Scylla maintains a separate shadowable tombstone in addition to the regular one. That means a row can have up to two tombstones in Scylla-written SSTables. If the second tombstone is present, the Scylla-specific extended flag HAS_SHADOWABLE_DELETION_SCYLLA is set.
+To address those, ScyllaDB maintains a separate shadowable tombstone in addition to the regular one. That means a row can have up to two tombstones in ScyllaDB-written SSTables. If the second tombstone is present, the ScyllaDB-specific extended flag HAS_SHADOWABLE_DELETION_SCYLLA is set.

-Note that Cassandra does not know this flag and would consider those files invalid. This is deemed to be safe to do because shadowable tombstones can only appear in Materialized Views tables and those are not supposed to be ever exported and imported between Scylla and Cassandra.
+Note that Cassandra does not recognize this flag and would consider those files invalid. This is deemed safe because shadowable tombstones can only appear in Materialized View tables, which are never supposed to be exported or imported between ScyllaDB and Cassandra.

Missing Columns Encoding
------------------------
@@ -342,7 +342,7 @@ If `columns.count() < superset.count() / 2`, the **present** columns indices are

Although the field is named `missing_columns`, one can see from the algorithm described above that in some cases the values stored are indices of present columns, not missing ones. This may be a bit confusing, but it helps to reason about it in the following way: whatever is stored can be used to get the list of missing columns.

-As of today, Scylla treats the whole set of columns as a superset regardless of whether all columns are ever filled or not. `See for details`_.
+As of today, ScyllaDB treats the whole set of columns as a superset regardless of whether all columns are ever filled or not. `See for details`_.

.. _`See for details`: https://github.com/scylladb/scylla/issues/3901

@@ -538,7 +538,7 @@ Shadowable Deletion
Initially, an extended `HAS_SHADOWABLE_DELETION` flag has been introduced in 3.0 format to solve a tricky problem described in [CASSANDRA-10261](https://issues.apache.org/jira/browse/CASSANDRA-10261). Later some other problems have been discovered ([CASSANDRA-11500](https://issues.apache.org/jira/browse/CASSANDRA-11500)) which led to a more generic approach that deprecated shadowable tombstones and used expired liveness info instead.

As a result, this flag is not supposed to be written for new SSTables by Cassandra.
-Scylla tracks the presence of this flag and fails to load files that have it set.
+ScyllaDB tracks the presence of this flag and fails to load files that have it set.


References
diff --git a/docs/architecture/sstable/sstable3/sstables-3-summary.rst b/docs/architecture/sstable/sstable3/sstables-3-summary.rst
--- a/docs/architecture/sstable/sstable3/sstables-3-summary.rst
+++ b/docs/architecture/sstable/sstable3/sstables-3-summary.rst
@@ -64,7 +64,7 @@ Summary Entries

The ``offsets`` array contains offsets of corresponding entries in the ``entries`` array below. The offsets are taken from the beginning of the ``summary_entries_block`` so ``offsets[0] == sizeof(uint32) * header.entries_count`` as the first entry begins right after the array of offsets.

-Note that ``offsets`` are written in the native order format although typically all the integers in SSTables files are written in big-endian. In Scylla, they are always written in little-endian order to allow interoperability with 1. Summary files written by Cassandra on the more common little-endian machines, and 2. Summary files written by Scylla on the rarer big-endian machines.
+Note that ``offsets`` are written in the native order format although typically all the integers in SSTables files are written in big-endian. In ScyllaDB, they are always written in little-endian order to allow interoperability with 1. Summary files written by Cassandra on the more common little-endian machines, and 2. Summary files written by ScyllaDB on the rarer big-endian machines.

Here is how a summary entry looks:

diff --git a/docs/contribute.rst b/docs/contribute.rst
--- a/docs/contribute.rst
+++ b/docs/contribute.rst
@@ -1,31 +1,31 @@
Contribute to ScyllaDB
=======================

-Thank you for your interest in making Scylla better!
-We appreciate your help and look forward to welcoming you to the Scylla Community.
+Thank you for your interest in making ScyllaDB better!
+We appreciate your help and look forward to welcoming you to the ScyllaDB Community.
There are two ways you can contribute:

-* Send a patch to the Scylla source code
-* Write documentation for Scylla Docs
+* Send a patch to the ScyllaDB source code
+* Write documentation for ScyllaDB Docs


-Contribute to Scylla's Source Code
-----------------------------------
-Scylla developers use patches and email to share and discuss changes.
+Contribute to ScyllaDB's Source Code
+------------------------------------
+ScyllaDB developers use patches and email to share and discuss changes.
Setting up can take a little time, but once you have done it the first time, it’s easy.

The basic steps are:

-* Join the Scylla community
+* Join the ScyllaDB community
* Create a Git branch to work on
* Commit your work with clear commit messages and sign-offs.
* Send a PR or use ``git format-patch`` and ``git send-email`` to send to the list


The entire process is `documented here <https://github.com/scylladb/scylla/blob/master/CONTRIBUTING.md>`_.

-Contribute to Scylla Docs
--------------------------
+Contribute to ScyllaDB Docs
+---------------------------

-Each Scylla project has accompanying documentation. For information about contributing documentation to a specific Scylla project, refer to the README file for the individual project.
-For general information or to contribute to the Scylla Sphinx theme, read the `Contributor's Guide <https://sphinx-theme.scylladb.com/stable/contribute/>`_.
\ No newline at end of file
+Each ScyllaDB project has accompanying documentation. For information about contributing documentation to a specific ScyllaDB project, refer to the README file for the individual project.
+For general information or to contribute to the ScyllaDB Sphinx theme, read the `Contributor's Guide <https://sphinx-theme.scylladb.com/stable/contribute/>`_.
\ No newline at end of file
diff --git a/docs/cql/compaction.rst b/docs/cql/compaction.rst
--- a/docs/cql/compaction.rst
+++ b/docs/cql/compaction.rst
@@ -5,11 +5,11 @@ Compaction
----------


-This document describes the compaction strategy options available when creating a table. For more information about creating a table in Scylla, refer to the :ref:`CQL Reference <create-table-statement>`.
+This document describes the compaction strategy options available when creating a table. For more information about creating a table in ScyllaDB, refer to the :ref:`CQL Reference <create-table-statement>`.

-By default, Scylla starts a compaction task whenever a new SSTable is written. Compaction merges several SSTables into a new SSTable, which contains only the live data from the input SSTables. Merging several sorted files to get a sorted result is an efficient process, and this is the main reason why SSTables are kept sorted.
+By default, ScyllaDB starts a compaction task whenever a new SSTable is written. Compaction merges several SSTables into a new SSTable, which contains only the live data from the input SSTables. Merging several sorted files to get a sorted result is an efficient process, and this is the main reason why SSTables are kept sorted.

-The following compaction strategies are supported by Scylla:
+The following compaction strategies are supported by ScyllaDB:

* Size-tiered Compaction Strategy (`STCS`_)

@@ -19,7 +19,7 @@ The following compaction strategies are supported by Scylla:

* Time-window Compaction Strategy (`TWCS`_)

-This page concentrates on the parameters to use when creating a table with a compaction strategy. If you are unsure which strategy to use or want general information on the compaction strategies which are available to Scylla, refer to :doc:`Compaction Strategies </architecture/compaction/compaction-strategies>`.
+This page concentrates on the parameters to use when creating a table with a compaction strategy. If you are unsure which strategy to use or want general information on the compaction strategies which are available to ScyllaDB, refer to :doc:`Compaction Strategies </architecture/compaction/compaction-strategies>`.

Common options
^^^^^^^^^^^^^^
@@ -214,7 +214,7 @@ TWCS options
=====

``expired_sstable_check_frequency_seconds`` (default: 600)
- Specifies (in seconds) how often Scylla will check for fully expired SSTables, which can be immediately dropped.
+ Specifies (in seconds) how often ScyllaDB will check for fully expired SSTables, which can be immediately dropped.

=====
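
For illustration, the TWCS options can be set together when creating or altering a table. A hedged sketch (the table name and window settings are hypothetical; ``expired_sstable_check_frequency_seconds`` is the option described above)::

    ALTER TABLE ks.sensor_readings WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'HOURS',
        'compaction_window_size': 12,
        'expired_sstable_check_frequency_seconds': 600
    };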

diff --git a/docs/cql/consistency-calculator.rst b/docs/cql/consistency-calculator.rst
--- a/docs/cql/consistency-calculator.rst
+++ b/docs/cql/consistency-calculator.rst
@@ -13,4 +13,4 @@ Additional Information
* :doc:`Fault Tolerance </architecture/architecture-fault-tolerance/>`
* :ref:`Consistency Level Compatibility <consistency-level-read-and-write>`
* :doc:`Consistency Quiz </kb/quiz-administrators/>`
-* Take a course on `Consistency Levels at Scylla University <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_
+* Take a course on `Consistency Levels at ScyllaDB University <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_
diff --git a/docs/cql/consistency.rst b/docs/cql/consistency.rst
--- a/docs/cql/consistency.rst
+++ b/docs/cql/consistency.rst
@@ -118,4 +118,4 @@ Additional Information
* :doc:`Fault Tolerance </architecture/architecture-fault-tolerance/>`
* :ref:`Consistency Level Compatibility <consistency-level-read-and-write>`
* :doc:`Consistency Quiz </kb/quiz-administrators/>`
-* Take a course on `Consistency Levels at Scylla University <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_
+* Take a course on `Consistency Levels at ScyllaDB University <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_
diff --git a/docs/cql/cqlsh.rst b/docs/cql/cqlsh.rst
--- a/docs/cql/cqlsh.rst
+++ b/docs/cql/cqlsh.rst
@@ -173,7 +173,7 @@ SHOW VERSION
This command is useful if you want to check which Cassandra version is compatible with your ScyllaDB version.
Note that the two standards are not 100% identical and this command is simply a comparison tool.

-If you want to display your current ScyllaDB version, refer to :ref:`Check your current version of Scylla <check-your-current-version-of-scylla>`.
+If you want to display your current ScyllaDB version, refer to :ref:`Check your current version of ScyllaDB <check-your-current-version-of-scylla>`.

The display shows:

@@ -539,7 +539,7 @@ Options that are common to both ``COPY TO`` and ``COPY FROM``.

See also:

-CQLSH `lesson <https://university.scylladb.com/courses/data-modeling/lessons/basic-data-modeling-2/topic/cql-cqlsh-and-basic-cql-syntax/>`_ on Scylla University
+CQLSH `lesson <https://university.scylladb.com/courses/data-modeling/lessons/basic-data-modeling-2/topic/cql-cqlsh-and-basic-cql-syntax/>`_ on ScyllaDB University

* :doc:`Apache Cassandra Query Language (CQL) Reference </cql/index>`

diff --git a/docs/cql/ddl.rst b/docs/cql/ddl.rst
--- a/docs/cql/ddl.rst
+++ b/docs/cql/ddl.rst
@@ -121,7 +121,7 @@ name kind mandatory default description

The ``replication`` property is mandatory and must at least contains the ``'class'`` sub-option, which defines the
replication strategy class to use. The rest of the sub-options depend on what replication
-strategy is used. By default, Scylla supports the following ``'class'``:
+strategy is used. By default, ScyllaDB supports the following ``'class'``:

.. _replication-strategy:

@@ -544,7 +544,7 @@ Another useful property of a partition is that when writing data, all the update
done *atomically* and in *isolation*, which is not the case across partitions.

The proper choice of the partition key and clustering columns for a table is probably one of the most important aspects
-of data modeling in Scylla. It largely impacts which queries can be performed and how efficient they are.
+of data modeling in ScyllaDB. It largely impacts which queries can be performed and how efficient they are.

.. note:: An empty string is *not* allowed as a partition key value. In a compound partition key (multiple partition-key columns), any or all of them may be empty strings. Empty string is *not* a Null value.

@@ -555,7 +555,7 @@ The clustering columns
``````````````````````

The clustering columns of a table define the clustering order for the partition of that table. For a given
-:ref:`partition <partition-key>`, all the rows are physically ordered inside Scylla by that clustering order. For
+:ref:`partition <partition-key>`, all the rows are physically ordered inside ScyllaDB by that clustering order. For
instance, given::

CREATE TABLE t (
@@ -709,7 +709,7 @@ A table supports the following options:
Speculative retry options
#########################

-By default, Scylla read coordinators only query as many replicas as necessary to satisfy
+By default, ScyllaDB read coordinators only query as many replicas as necessary to satisfy
consistency levels: one for consistency level ``ONE``, a quorum for ``QUORUM``, and so on.
``speculative_retry`` determines when coordinators may query additional replicas, which is useful
when replicas are slow or unresponsive. The following are legal values (case-insensitive):
@@ -972,7 +972,7 @@ The ``ALTER TABLE`` statement can:

.. warning:: Dropping a column assumes that the timestamps used for the value of this column are "real" timestamps in
   microseconds. Using "real" timestamps in microseconds is the default and is **strongly** recommended, but as
- Scylla allows the client to provide any timestamp on any table, it is theoretically possible to use another
+ ScyllaDB allows the client to provide any timestamp on any table, it is theoretically possible to use another
convention. Please be aware that if you do so, dropping a column will not work correctly.

.. warning:: Once a column is dropped, it is allowed to re-add a column with the same name as the dropped one
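
As an illustration of the speculative retry options discussed above, the setting is applied per table. A hedged sketch (the table name is hypothetical, and ``'99.0PERCENTILE'`` is one percentile-style form of the legal values)::

    -- Query an additional replica when the contacted replicas are slower than
    -- the 99th percentile latency.
    ALTER TABLE ks.users WITH speculative_retry = '99.0PERCENTILE';
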
diff --git a/docs/cql/dml.rst b/docs/cql/dml.rst
--- a/docs/cql/dml.rst
+++ b/docs/cql/dml.rst
@@ -29,7 +29,7 @@ parameters:
:ref:`INSERT <insert-statement>`, :ref:`UPDATE <update-statement>`, :ref:`DELETE <delete_statement>`, or :ref:`BATCH <batch_statement>`
statements ``USING TIMESTAMP`` should provide a unique timestamp value, similar to the one
implicitly set by the coordinator by default, when the `USING TIMESTAMP` update parameter is absent.
- Scylla ensures that query timestamps created by the same coordinator node are unique (even across different shards
+ ScyllaDB ensures that query timestamps created by the same coordinator node are unique (even across different shards
on the same node). However, timestamps assigned at different nodes are not guaranteed to be globally unique.
Note that with a steadily high write rate, timestamp collision is not unlikely. If it happens, e.g. two INSERTS
have the same timestamp, a conflict resolution algorithm determines which of the inserted cells prevails (see :ref:`update ordering <update-ordering>` for more information):
@@ -38,7 +38,7 @@ parameters:
the columns themselves. This means that any subsequent update of the column will also reset the TTL (to whatever TTL
is specified in that update). By default, values never expire. A TTL of 0 is equivalent to no TTL. If the table has a
default_time_to_live, a TTL of 0 will remove the TTL for the inserted or updated values. A TTL of ``null`` is equivalent
- to inserting with a TTL of 0. You can read more about TTL in the :doc:`documentation </cql/time-to-live>` and also in `this Scylla University lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_.
+ to inserting with a TTL of 0. You can read more about TTL in the :doc:`documentation </cql/time-to-live>` and also in `this ScyllaDB University lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_.
- ``TIMEOUT``: specifies a timeout duration for the specific request.
Please refer to the :ref:`SELECT <using-timeout>` section for more information.
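
A minimal sketch combining the update parameters above in one statement (the table, values, and literal timestamp are hypothetical; the timestamp is in microseconds and the TTL in seconds)::

    INSERT INTO ks.events (id, payload)
    VALUES (1, 'hello')
    USING TIMESTAMP 1625140800000000 AND TTL 86400;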

@@ -76,15 +76,15 @@ reach different results, reading from different replicas would detect the incons
read-repair that will generate yet another cell that would still conflict with the existing cells,
with no guarantee for convergence.

-Therefore, Scylla implements an internal, consistent conflict-resolution algorithm
+Therefore, ScyllaDB implements an internal, consistent conflict-resolution algorithm
that orders cells with conflicting ``TIMESTAMP`` values based on other properties, like:

* whether the cell is a tombstone or a live cell,
* whether the cell has an expiration time,
* the cell ``TTL``,
* and finally, what value the cell carries.

-The conflict-resolution algorithm is documented in `Scylla's internal documentation <https://github.com/scylladb/scylladb/blob/master/docs/dev/timestamp-conflict-resolution.md>`_
+The conflict-resolution algorithm is documented in `ScyllaDB's internal documentation <https://github.com/scylladb/scylladb/blob/master/docs/dev/timestamp-conflict-resolution.md>`_
and it may be subject to change.

Reliable serialization can be achieved using unique write ``TIMESTAMP``
diff --git a/docs/cql/dml/batch.rst b/docs/cql/dml/batch.rst
--- a/docs/cql/dml/batch.rst
+++ b/docs/cql/dml/batch.rst
@@ -41,7 +41,7 @@ Note that:
- ``BATCH`` statements may only contain ``UPDATE``, ``INSERT`` and ``DELETE`` statements (not other batches, for instance).
- Batches are *not* a full analogue for SQL transactions.
- If a timestamp is not specified for each operation, then all operations will be applied with the same timestamp
- (either one generated automatically, or the timestamp provided at the batch level). Due to Scylla's conflict
+ (either one generated automatically, or the timestamp provided at the batch level). Due to ScyllaDB's conflict
resolution procedure in the case of timestamp ties, operations may be applied in an order that is different from the order they are listed in the ``BATCH`` statement. To force a
particular operation ordering, you must specify per-operation timestamps.
- A LOGGED batch to a single partition will be converted to an UNLOGGED batch as an optimization.
@@ -54,18 +54,18 @@ For more information on the :token:`update_parameter` refer to the :ref:`UPDATE
``UNLOGGED`` batches
~~~~~~~~~~~~~~~~~~~~

-By default, Scylla uses a batch log to ensure all operations in a batch eventually complete or none will (note,
+By default, ScyllaDB uses a batch log to ensure all operations in a batch eventually complete or none will (note,
however, that operations are only isolated within a single partition).

There is a performance penalty for batch atomicity when a batch spans multiple partitions. If you do not want to incur
-this penalty, you can tell Scylla to skip the batchlog with the ``UNLOGGED`` option. If the ``UNLOGGED`` option is
+this penalty, you can tell ScyllaDB to skip the batchlog with the ``UNLOGGED`` option. If the ``UNLOGGED`` option is
used, a failed batch might leave the batch only partly applied.

``COUNTER`` batches
~~~~~~~~~~~~~~~~~~~

Use the ``COUNTER`` option for batched counter updates. Unlike other
-updates in Scylla, counter updates are not idempotent.
+updates in ScyllaDB, counter updates are not idempotent.


:doc:`Apache Cassandra Query Language (CQL) Reference </cql/index>`
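
Tying the notes above together, a sketch of an ``UNLOGGED`` batch that forces a particular operation ordering with per-operation timestamps (keyspace, table, and values are hypothetical)::

    BEGIN UNLOGGED BATCH
        INSERT INTO ks.users (id, name) VALUES (1, 'alice') USING TIMESTAMP 1625140800000001;
        UPDATE ks.users USING TIMESTAMP 1625140800000002 SET name = 'bob' WHERE id = 2;
        DELETE FROM ks.users USING TIMESTAMP 1625140800000003 WHERE id = 3;
    APPLY BATCH;
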
diff --git a/docs/cql/dml/insert.rst b/docs/cql/dml/insert.rst
--- a/docs/cql/dml/insert.rst
+++ b/docs/cql/dml/insert.rst
@@ -43,9 +43,9 @@ of eventual consistency on an event of a timestamp collision:
nodes proceed without coordination. Eventually cell values
supplied by a statement with the highest timestamp will prevail (see :ref:`update ordering <update-ordering>`).

-Unless a timestamp is provided by the client, Scylla will automatically
+Unless a timestamp is provided by the client, ScyllaDB will automatically
generate a timestamp with microsecond precision for each
-column assigned by ``INSERT``. Scylla ensures timestamps created
+column assigned by ``INSERT``. ScyllaDB ensures timestamps created
by the same node are unique. Timestamps assigned at different
nodes are not guaranteed to be globally unique.
With a steadily high write rate timestamp collision
diff --git a/docs/cql/dml/select.rst b/docs/cql/dml/select.rst
--- a/docs/cql/dml/select.rst
+++ b/docs/cql/dml/select.rst
@@ -116,7 +116,7 @@ You can read more about the ``TIMESTAMP`` retrieved by ``WRITETIME`` in the :ref

- ``TTL`` retrieves the remaining time to live (in *seconds*) for the value of the column, if it set to expire, or ``null`` otherwise.

-You can read more about TTL in the :doc:`documentation </cql/time-to-live>` and also in `this Scylla University lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_.
+You can read more about TTL in the :doc:`documentation </cql/time-to-live>` and also in `this ScyllaDB University lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_.
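
A quick illustration of the two functions above, assuming a hypothetical ``users`` table with a regular ``name`` column::

    SELECT name, WRITETIME(name), TTL(name) FROM users WHERE id = 1;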

.. _where-clause:

@@ -240,7 +240,7 @@ Limiting results
~~~~~~~~~~~~~~~~

The ``LIMIT`` option to a ``SELECT`` statement limits the number of rows returned by a query, while the ``PER PARTITION
-LIMIT`` option (introduced in Scylla 3.1) limits the number of rows returned for a given **partition** by the query. Note that both types of limit can be
+LIMIT`` option (introduced in ScyllaDB 3.1) limits the number of rows returned for a given **partition** by the query. Note that both types of limit can be
used in the same statement.

Examples:
@@ -359,7 +359,7 @@ Then the following queries are valid::
SELECT * FROM users;
SELECT * FROM users WHERE birth_year = 1981;

-because in both cases, Scylla guarantees that these queries' performance will be proportional to the amount of data
+because in both cases, ScyllaDB guarantees that these queries' performance will be proportional to the amount of data
returned. In particular, if no users were born in 1981, then the second query performance will not depend on the number
of user profiles stored in the database (not directly at least: due to secondary index implementation consideration, this
query may still depend on the number of nodes in the cluster, which indirectly depends on the amount of data stored.
@@ -371,7 +371,7 @@ However, the following query will be rejected::

SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR';

-because Scylla cannot guarantee that it won't have to scan a large amount of data even if the result of those queries is
+because ScyllaDB cannot guarantee that it won't have to scan a large amount of data even if the result of those queries is
small. Typically, it will scan all the index entries for users born in 1981 even if only a handful are actually from
France. However, if you “know what you are doing”, you can force the execution of this query by using ``ALLOW
FILTERING`` and so the following query is valid::
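
    -- Illustrative only: the rejected statement above with ALLOW FILTERING appended,
    -- using the users table defined earlier on this page.
    SELECT * FROM users WHERE birth_year = 1981 AND country = 'FR' ALLOW FILTERING;
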
@@ -409,7 +409,7 @@ Bypass Cache
The ``BYPASS CACHE`` clause on SELECT statements informs the database that the data being read is unlikely to be read again in the near future, and also was unlikely to have been read in the near past; therefore, no attempt should be made to read it from the cache or to populate the cache with the data. This is mostly useful for range scans; these typically process large amounts of data with no temporal locality and do not benefit from the cache.
The clause is placed immediately after the optional ALLOW FILTERING clause.

-``BYPASS CACHE`` is a Scylla CQL extension and not part of Apache Cassandra CQL.
+``BYPASS CACHE`` is a ScyllaDB CQL extension and not part of Apache Cassandra CQL.

For example::

@@ -429,14 +429,14 @@ For example::
SELECT * FROM users USING TIMEOUT 5s;
SELECT name, occupation FROM users WHERE userid IN (199, 200, 207) BYPASS CACHE USING TIMEOUT 200ms;

-``USING TIMEOUT`` is a Scylla CQL extension and not part of Apache Cassandra CQL.
+``USING TIMEOUT`` is a ScyllaDB CQL extension and not part of Apache Cassandra CQL.

.. _like-operator:

LIKE Operator
~~~~~~~~~~~~~

-The ``LIKE`` operation on ``SELECT`` statements informs Scylla that you are looking for a pattern match. The expression ‘column LIKE pattern’ yields true only if the entire column value matches the pattern.
+The ``LIKE`` operation on ``SELECT`` statements informs ScyllaDB that you are looking for a pattern match. The expression ‘column LIKE pattern’ yields true only if the entire column value matches the pattern.

The search pattern is a string of characters with two wildcards, as shown:

@@ -454,15 +454,15 @@ For example, consider the search pattern 'M%n' - this will match ``Martin``, but

A query can find all values containing some text fragment by matching to an appropriate ``LIKE`` pattern.

-**Differences Between Scylla and Cassandra LIKE Operators**
+**Differences Between ScyllaDB and Cassandra LIKE Operators**

-* In Apache Cassandra, you must create a SASI index to use LIKE. Scylla supports LIKE as a regular filter.
-* Consequently, Scylla LIKE will be less performant than Apache Cassandra LIKE for some workloads.
-* Scylla treats underscore (_) as a wildcard; Cassandra doesn't.
-* Scylla treats percent (%) as a wildcard anywhere in the pattern; Cassandra only at the beginning/end
-* Scylla interprets backslash (\\) as an escape character; Cassandra doesn't.
-* Cassandra allows case-insensitive LIKE; Scylla doesn't (see `#4911 <https://github.com/scylladb/scylla/issues/4911>`_).
-* Scylla allows empty LIKE pattern; Cassandra doesn't.
+* In Apache Cassandra, you must create a SASI index to use LIKE. ScyllaDB supports LIKE as a regular filter.
+* Consequently, ScyllaDB LIKE will be less performant than Apache Cassandra LIKE for some workloads.
+* ScyllaDB treats underscore (_) as a wildcard; Cassandra doesn't.
+* ScyllaDB treats percent (%) as a wildcard anywhere in the pattern; Cassandra does so only at the beginning or end.
+* ScyllaDB interprets backslash (\\) as an escape character; Cassandra doesn't.
+* Cassandra allows case-insensitive LIKE; ScyllaDB doesn't (see `#4911 <https://github.com/scylladb/scylla/issues/4911>`_).
+* ScyllaDB allows empty LIKE pattern; Cassandra doesn't.
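
As an illustrative sketch of the matching rules above (using the ``users`` table and ``name`` column from the earlier examples; not the elided Example A), the 'M%n' pattern can be applied as a filter::

    SELECT * FROM users WHERE name LIKE 'M%n' ALLOW FILTERING;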

**Example A**

diff --git a/docs/cql/json.rst b/docs/cql/json.rst
--- a/docs/cql/json.rst
+++ b/docs/cql/json.rst
@@ -23,7 +23,7 @@
JSON Support
------------

-Scylla introduces JSON support to :ref:`SELECT <select-statement>` and :ref:`INSERT <insert-statement>`
+ScyllaDB introduces JSON support to :ref:`SELECT <select-statement>` and :ref:`INSERT <insert-statement>`
statements. This support does not fundamentally alter the CQL API (for example, the schema is still enforced). It simply
provides a convenient way to work with JSON documents.
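
For instance, both directions look roughly like this (a sketch assuming a hypothetical ``users`` table with an integer ``userid`` key and a text ``name`` column)::

    INSERT INTO users JSON '{"userid": 199, "name": "Ann"}';
    SELECT JSON userid, name FROM users WHERE userid = 199;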

@@ -57,17 +57,17 @@ Alternatively, if the ``DEFAULT UNSET`` directive is used after the value, omitt
meaning that pre-existing values for those columns will be preserved.


-JSON Encoding of Scylla Data Types
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+JSON Encoding of ScyllaDB Data Types
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Where possible, Scylla will represent and accept data types in their native ``JSON`` representation. Scylla will
+Where possible, ScyllaDB will represent and accept data types in their native ``JSON`` representation. ScyllaDB will
also accept string representations matching the CQL literal format for all single-field types. For example, floats,
ints, UUIDs, and dates can be represented by CQL literal strings. However, compound types, such as collections, tuples,
and user-defined types, must be represented by native ``JSON`` collections (maps and lists) or a JSON-encoded string
representation of the collection.

-The following table describes the encodings that Scylla will accept in ``INSERT JSON`` values (and ``fromJson()``
-arguments) as well as the format Scylla will use when returning data for ``SELECT JSON`` statements (and
+The following table describes the encodings that ScyllaDB will accept in ``INSERT JSON`` values (and ``fromJson()``
+arguments) as well as the format ScyllaDB will use when returning data for ``SELECT JSON`` statements (and
``fromJson()``):

=============== ======================== =============== ==============================================================
diff --git a/docs/cql/secondary-indexes.rst b/docs/cql/secondary-indexes.rst
--- a/docs/cql/secondary-indexes.rst
+++ b/docs/cql/secondary-indexes.rst
@@ -59,7 +59,7 @@ automatically at insertion time.
Local Secondary Index
^^^^^^^^^^^^^^^^^^^^^

-:doc:`Local Secondary Indexes </using-scylla/local-secondary-indexes>` is an enhancement of :doc:`Global Secondary Indexes </using-scylla/secondary-indexes>`, which allows Scylla to optimize the use case in which the partition key of the base table is also the partition key of the index. Local Secondary Index syntax is the same as above, with extra parentheses on the partition key.
+:doc:`Local Secondary Indexes </using-scylla/local-secondary-indexes>` is an enhancement of :doc:`Global Secondary Indexes </using-scylla/secondary-indexes>`, which allows ScyllaDB to optimize the use case in which the partition key of the base table is also the partition key of the index. Local Secondary Index syntax is the same as above, with extra parentheses on the partition key.

.. code-block::
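
The statements under that directive sit outside this hunk; as an illustrative sketch (hypothetical ``menus`` table keyed by ``location``), a Local Secondary Index is declared with the partition key in extra parentheses::

    CREATE TABLE menus (location text, name text, dish_type text, price float, PRIMARY KEY (location, name));
    CREATE INDEX ON menus ((location), dish_type);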

@@ -78,7 +78,7 @@ More on :doc:`Local Secondary Indexes </using-scylla/local-secondary-indexes>`
.. Attempting to create an already existing index will return an error unless the ``IF NOT EXISTS`` option is used. If it
.. is used, the statement will be a no-op if the index already exists.

-.. Indexes on Map Keys (supported in Scylla 2.2)
+.. Indexes on Map Keys (supported in ScyllaDB 2.2)
.. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. When creating an index on a :ref:`maps <maps>`, you may index either the keys or the values. If the column identifier is
@@ -108,7 +108,7 @@ Additional Information
* :doc:`Global Secondary Indexes </using-scylla/secondary-indexes/>`
* :doc:`Local Secondary Indexes </using-scylla/local-secondary-indexes/>`

-The following courses are available from Scylla University:
+The following courses are available from ScyllaDB University:

* `Materialized Views and Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/>`_
* `Global Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/topic/global-secondary-indexes/>`_
diff --git a/docs/cql/time-to-live.rst b/docs/cql/time-to-live.rst
--- a/docs/cql/time-to-live.rst
+++ b/docs/cql/time-to-live.rst
@@ -7,7 +7,7 @@
Expiring Data with Time to Live (TTL)
-------------------------------------

-Scylla (as well as Apache Cassandra) provides the functionality to automatically delete expired data according to the Time to Live (or TTL) value.
+ScyllaDB (as well as Apache Cassandra) provides the functionality to automatically delete expired data according to the Time to Live (or TTL) value.
TTL is measured in seconds. If the field is not updated within the TTL it is deleted.
The TTL can be set when defining a Table (CREATE), or when using the INSERT and UPDATE queries.
The expiration works at the individual column level, which provides a lot of flexibility.
@@ -91,7 +91,7 @@ Notes
* Notice that setting the TTL on a column using UPDATE or INSERT overrides the default_time_to_live set at the Table level.
* The TTL is determined by the coordinator node. When using TTL, make sure that all the nodes in the cluster have synchronized clocks.
* When using TTL for a table, consider using the TWCS compaction strategy.
-* Scylla defines TTL on a per column basis, for non-primary key columns. It’s impossible to set the TTL for the entire row after an initial insert; instead, you can reinsert the row (which is actually an upsert).
+* ScyllaDB defines TTL on a per column basis, for non-primary key columns. It’s impossible to set the TTL for the entire row after an initial insert; instead, you can reinsert the row (which is actually an upsert).
* TTL can not be defined for counter columns.
* To remove the TTL, set it to 0.
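
Putting the notes above together, a brief sketch (hypothetical ``users`` table) of setting, inspecting, and removing a TTL::

    -- set a 10-minute TTL on the inserted non-key columns
    INSERT INTO users (userid, name) VALUES (1, 'Ann') USING TTL 600;
    -- check the remaining TTL in seconds
    SELECT TTL(name) FROM users WHERE userid = 1;
    -- remove the TTL (overrides any table-level default_time_to_live)
    UPDATE users USING TTL 0 SET name = 'Ann' WHERE userid = 1;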

@@ -100,7 +100,7 @@ Notes
Additional Information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-To learn more about TTL, and see a hands-on example, check out `this lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_ on Scylla University.
+To learn more about TTL, and see a hands-on example, check out `this lesson <https://university.scylladb.com/courses/data-modeling/lessons/advanced-data-modeling/topic/expiring-data-with-ttl-time-to-live/>`_ on ScyllaDB University.

* :doc:`Apache Cassandra Query Language (CQL) Reference </cql/index>`
* :doc:`KB Article:How to Change gc_grace_seconds for a Table </kb/gc-grace-seconds/>`
diff --git a/docs/cql/types.rst b/docs/cql/types.rst
--- a/docs/cql/types.rst
+++ b/docs/cql/types.rst
@@ -146,7 +146,7 @@ valid ``timestamp`` values for Mar 2, 2011, at 04:05:00 AM, GMT:

The ``+0000`` above is an RFC 822 4-digit time zone specification; ``+0000`` refers to GMT. US Pacific Standard Time is
``-0800``. The time zone may be omitted if desired (``'2011-02-03 04:05:00'``), and if so, the date will be interpreted
-as being in the time zone under which the coordinating Scylla node is configured. However, there are difficulties
+as being in the time zone under which the coordinating ScyllaDB node is configured. However, there are difficulties
inherent in relying on the time zone configuration as expected, so it is recommended that the time zone always be
specified for timestamps when feasible.
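
For example, spelling out the offset removes the ambiguity (a sketch with a hypothetical ``events`` table whose ``created_at`` column is a ``timestamp``)::

    INSERT INTO events (id, created_at) VALUES (1, '2011-02-03 04:05:00+0000');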

@@ -317,7 +317,7 @@ old content, if any::

UPDATE users SET favs = { 'fruit' : 'Banana' } WHERE id = 'jsmith';

-Note that Scylla does not distinguish an empty map from a missing value,
+Note that ScyllaDB does not distinguish an empty map from a missing value,
thus assigning an empty map (``{}``) to a map is the same as deleting it.

Further, maps support:
@@ -363,7 +363,7 @@ old content, if any::

UPDATE images SET tags = { 'kitten', 'cat', 'lol' } WHERE name = 'cat.jpg';

-Note that Scylla does not distinguish an empty set from a missing value,
+Note that ScyllaDB does not distinguish an empty set from a missing value,
thus assigning an empty set (``{}``) to a set is the same as deleting it.

Further, sets support:
@@ -406,7 +406,7 @@ old content, if any::

UPDATE plays SET scores = [ 3, 9, 4] WHERE id = '123-afde';

-Note that Scylla does not distinguish an empty list from a missing value,
+Note that ScyllaDB does not distinguish an empty list from a missing value,
thus assigning an empty list (``[]``) to a list is the same as deleting it.

Further, lists support:
@@ -469,7 +469,7 @@ Creating a new user-defined type is done using a ``CREATE TYPE`` statement defin
field_definition: `identifier` `cql_type`

A UDT has a name (``udt_name``), which is used to declare columns of that type and is a set of named and typed fields. The ``udt_name`` can be any
-type, including collections or other UDTs. UDTs and collections inside collections must always be frozen (no matter which version of Scylla you are using).
+type, including collections or other UDTs. UDTs and collections inside collections must always be frozen (no matter which version of ScyllaDB you are using).

For example::

@@ -501,10 +501,10 @@ For example::

- Attempting to create an already existing type will result in an error unless the ``IF NOT EXISTS`` option is used. If it is used, the statement will be a no-op if the type already exists.
- A type is intrinsically bound to the keyspace in which it is created and can only be used in that keyspace. At creation, if the type name is prefixed by a keyspace name, it is created in that keyspace. Otherwise, it is created in the current keyspace.
- - As of Scylla Open Source 3.2, UDTs not inside collections do not have to be frozen, but in all versions prior to Scylla Open Source 3.2, and in all Scylla Enterprise versions, UDTs **must** be frozen.
+ - As of ScyllaDB Open Source 3.2, UDTs not inside collections do not have to be frozen, but in all versions prior to ScyllaDB Open Source 3.2, and in all ScyllaDB Enterprise versions, UDTs **must** be frozen.


-A non-frozen UDT example with Scylla Open Source 3.2 and higher::
+A non-frozen UDT example with ScyllaDB Open Source 3.2 and higher::

CREATE TYPE ut (a int, b int);
CREATE TABLE cf (a int primary key, b ut);
diff --git a/docs/cql/wasm.rst b/docs/cql/wasm.rst
--- a/docs/cql/wasm.rst
+++ b/docs/cql/wasm.rst
@@ -10,7 +10,7 @@ This document describes the details of Wasm language support in user-defined fun
How to generate a correct Wasm UDF source code
----------------------------------------------

-Scylla accepts UDF's source code in WebAssembly Text ("WAT") format. The source can use and define whatever's needed for execution, including multiple helper functions and symbols. The requirements for it to be accepted as correct UDF source are that the WebAssembly module export a symbol with the same name as the function, this symbol's type should be indeed a function with correct signature, and the module export a ``_scylla_abi`` global and all symbols related to the selected :ref:`ABI version <abi-versions>`.
+ScyllaDB accepts UDF's source code in WebAssembly Text ("WAT") format. The source can use and define whatever's needed for execution, including multiple helper functions and symbols. The requirements for it to be accepted as correct UDF source are that the WebAssembly module export a symbol with the same name as the function, this symbol's type should be indeed a function with correct signature, and the module export a ``_scylla_abi`` global and all symbols related to the selected :ref:`ABI version <abi-versions>`.

UDF's source code can be, naturally, simply coded by hand in WAT. It is not often very convenient to program directly in assembly, so here are a few tips.

diff --git a/docs/faq.rst b/docs/faq.rst
--- a/docs/faq.rst
+++ b/docs/faq.rst
@@ -5,33 +5,33 @@ ScyllaDB FAQ
.. meta::
:title:
:description: Frequently Asked Questions about ScyllaDB
- :keywords: questions, Scylla, ScyllaDB, DBaaS, FAQ, error, problem
+ :keywords: questions, ScyllaDB, DBaaS, FAQ, error, problem

Performance
-----------

-Scylla is using all of my memory! Why is that? What if the server runs out of memory?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Scylla uses available memory to cache your data. Scylla knows how to dynamically manage memory for optimal performance; for example, if many clients connect to Scylla, it will evict some data from the cache to make room for these connections; when the connection count drops again, this memory is returned to the cache.
+ScyllaDB is using all of my memory! Why is that? What if the server runs out of memory?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ScyllaDB uses available memory to cache your data. ScyllaDB knows how to dynamically manage memory for optimal performance; for example, if many clients connect to ScyllaDB, it will evict some data from the cache to make room for these connections; when the connection count drops again, this memory is returned to the cache.

-Can I limit Scylla to use less CPU and memory?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-The :code:`--smp` option (for instance, :code:`--smp 2`) will restrict Scylla to a smaller number of CPUs. It will still use 100 % of those CPUs, but at least won’t take your system out completely. An analogous option exists for memory: :code:`-m`.
+Can I limit ScyllaDB to use less CPU and memory?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The :code:`--smp` option (for instance, :code:`--smp 2`) will restrict ScyllaDB to a smaller number of CPUs. It will still use 100 % of those CPUs, but at least won’t take your system out completely. An analogous option exists for memory: :code:`-m`.

-What are some of the techniques Scylla uses to achieve its performance?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Scylla tries to utilize all available resources (processor cores, memory, storage, and networking) by always operating in parallel and never blocking. If Scylla needs to read a disk block, it initiates the read and immediately moves on to another task. Later, when the read completes Scylla resumes the original task from where it left off. By never blocking, a high degree of concurrency is achieved, allowing all resources to be utilized to their limit.
-Read more on Scylla Architecture:
+What are some of the techniques ScyllaDB uses to achieve its performance?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ScyllaDB tries to utilize all available resources (processor cores, memory, storage, and networking) by always operating in parallel and never blocking. If ScyllaDB needs to read a disk block, it initiates the read and immediately moves on to another task. Later, when the read completes, ScyllaDB resumes the original task from where it left off. By never blocking, a high degree of concurrency is achieved, allowing all resources to be utilized to their limit.
+Read more on ScyllaDB Architecture:

-* `Scylla Technology <http://www.scylladb.com/product/technology/>`_
-* `Scylla Memory Management <http://www.scylladb.com/product/technology/memory-management/>`_
+* `ScyllaDB Technology <http://www.scylladb.com/product/technology/>`_
+* `ScyllaDB Memory Management <http://www.scylladb.com/product/technology/memory-management/>`_

-I thought that Scylla's underlying `Seastar framework <https://github.com/scylladb/seastar>`_ uses one thread per core, but I see more than two threads per core. Why?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+I thought that ScyllaDB's underlying `Seastar framework <https://github.com/scylladb/seastar>`_ uses one thread per core, but I see more than two threads per core. Why?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Seastar creates an extra thread per core for blocking syscalls (like :code:`open()`/ :code:`fsync()` / :code:`close()` ); this allows the Seastar reactor to continue executing while a blocking operation takes place. Those threads are usually idle, so they don’t contribute to significant context switching activity.

-I’m seeing X compaction running in parallel on a single Scylla node. Is it normal?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+I’m seeing X compaction running in parallel on a single ScyllaDB node. Is it normal?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Yes, for more than one reason:

* each shard (core) will run its compactions independently, often at the same time,
@@ -42,22 +42,22 @@ Yes, for more than one reason:

Setting io.conf configuration for HDD storage
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-As part of the Scylla setup process, **iotune** runs a short benchmark of your storage. When completed, it generates the `/etc/scylla.d/io.conf` configuration file. Note that iotune has known issues benchmarking HDD storage.
+As part of the ScyllaDB setup process, **iotune** runs a short benchmark of your storage. When completed, it generates the `/etc/scylla.d/io.conf` configuration file. Note that iotune has known issues benchmarking HDD storage.

.. note:: This section is not relevant in 2.3

-Therefore, when using Scylla with HDD storage, it is recommended to use RAID0 on all of your available disks, and manually update the `io.conf` configuration file `max-io-request` parameter. This parameter sets the number of concurrent requests sent to the storage. The value for this parameter should be 3X (3 times) the number of your disks. For example, if you have 3 disks, you would set `max-io-request=9`.
+Therefore, when using ScyllaDB with HDD storage, it is recommended to use RAID0 on all of your available disks, and manually update the `io.conf` configuration file `max-io-request` parameter. This parameter sets the number of concurrent requests sent to the storage. The value for this parameter should be 3X (3 times) the number of your disks. For example, if you have 3 disks, you would set `max-io-request=9`.

-How many connections is it recommended to open from each Scylla client application?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+How many connections is it recommended to open from each ScyllaDB client application?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-As a rule of thumb, for Scylla's best performance, each client needs at least 1-3 connection per Scylla core.
-For example, a cluster with three nodes, each node with 16 cores, each client application should open 32 (2x16) connections to each Scylla node.
+As a rule of thumb, for ScyllaDB's best performance, each client needs at least 1-3 connections per ScyllaDB core.
+For example, in a cluster of three nodes, each with 16 cores, each client application should open 32 (2x16) connections to each ScyllaDB node.

-Do I need to configure ``swap`` on a Scylla node?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Do I need to configure ``swap`` on a ScyllaDB node?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Yes, configuring ``swap`` on a Scylla node is recommended.
+Yes, configuring ``swap`` on a ScyllaDB node is recommended.
``swap`` size should be set to either ``total_mem``/3 or 16GB - lower of the two.

``total_mem`` is the total size of the nodes memory.
@@ -91,9 +91,9 @@ Disk Space

.. _reclaim-space:

-Dropping a table does not reduce storage used by Scylla, how can I clean the disk from dropped tables?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-scylla.yaml includes an ``auto_snapshot`` parameter; when true (it is by default), Scylla creates a snapshot for a table just before dropping it, as a safety measure.
+Dropping a table does not reduce storage used by ScyllaDB, how can I clean the disk from dropped tables?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+scylla.yaml includes an ``auto_snapshot`` parameter; when true (the default), ScyllaDB creates a snapshot for a table just before dropping it, as a safety measure.
You can find the snapshot in the ``snapshots`` directory, under the table SSTable. For example, for dropped table ``users`` in keyspace ``mykeyspace``:

:code:`/var/lib/scylla/data/mykeyspace/users-bdba4e60f6d511e7a2ab000000000000/snapshots/1515678531438-users`
@@ -121,14 +121,14 @@ You need to add the line :code:`experimental: true` to your :code:`scylla.yaml`

:code:`$ docker stop <your_node> && docker start <your_node>`

- Alternately, starting from Scylla 2.0, you can start Scylla for Docker with the :code:`experimental` flag as follows:
+ Alternatively, starting from ScyllaDB 2.0, you can start ScyllaDB for Docker with the :code:`experimental` flag as follows:

:code:`$ docker run --name <your_node> -d scylladb/scylla --experimental 1`

-You should now be able to use the experimental features available in your version of Scylla.
+You should now be able to use the experimental features available in your version of ScyllaDB.

-How do I check the current version of Scylla that I am running?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+How do I check the current version of ScyllaDB that I am running?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* On a regular system or VM (running Ubuntu, CentOS, or RedHat Enterprise): :code:`$ scylla --version`

Check the :doc:`Operating System Support Guide </getting-started/os-support>` for a list of supported operating systems and versions.
@@ -138,8 +138,8 @@ Check the :doc:`Operating System Support Guide </getting-started/os-support>` fo
I am upgrading my nodes to a version that uses a newer SSTable format, when will the nodes start using the new SSTable format?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-The :doc:`new "mc" SSTable format</architecture/sstable/sstable3/index>` is supported in Scylla 3.0 and later.
-Scylla only starts using the newer format when every node in the cluster is capable to generate it.
+The :doc:`new "mc" SSTable format</architecture/sstable/sstable3/index>` is supported in ScyllaDB 3.0 and later.
+ScyllaDB only starts using the newer format when every node in the cluster is capable of generating it.
Therefore, only when all nodes in the cluster are upgraded the new format is used.

Docker
@@ -155,10 +155,10 @@ See `Error connecting Java Spring application to ScyllaDB Cluster in Docker <htt


Installation
------------
-Can I install Scylla on an Apache Cassandra server?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Scylla comes with its own version of the Apache Cassandra client tools, in the package :code:`scylla-tools`. Trying to install it on a server with Cassandra already installed may result in something like:
+Can I install ScyllaDB on an Apache Cassandra server?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ScyllaDB comes with its own version of the Apache Cassandra client tools, in the package :code:`scylla-tools`. Trying to install it on a server with Cassandra already installed may result in something like:

.. code-block:: console

@@ -267,12 +267,12 @@ Yes, but it will require running a full repair (or cleanup) to change the replic
Why can't I set ``listen_address`` to listen to 0.0.0.0 (all my addresses)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Scylla is a gossip-based distributed system and ``listen_address`` is the address a node tells other nodes to reach
+ScyllaDB is a gossip-based distributed system and ``listen_address`` is the address a node tells other nodes to reach
it at. Telling other nodes "contact me on any of my addresses" is a bad idea; if different nodes in the cluster pick
different addresses for you, Bad Things happen.

If you don't want to manually specify an IP to ``listen_address`` for each node in your cluster (understandable!), leave
-it blank and Scylla will use ``InetAddress.getLocalHost()`` to pick an address. Then it's up to you or your ops team
+it blank and ScyllaDB will use ``InetAddress.getLocalHost()`` to pick an address. Then it's up to you or your ops team
to make things resolve correctly (``/etc/hosts/``, dns, etc).

.. _faq-best-scenario-node-multi-availability-zone:
@@ -336,10 +336,10 @@ Where can I ask a question not covered here?
* `scylladb-dev <https://groups.google.com/d/forum/scylladb-dev>`_: Discuss the development of ScyllaDB itself.


-I deleted data from Scylla, but disk usage stays the same. Why?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+I deleted data from ScyllaDB, but disk usage stays the same. Why?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Data you write to Scylla gets persisted to SSTables. Since SSTables are immutable, the data can't actually be removed
+Data you write to ScyllaDB gets persisted to SSTables. Since SSTables are immutable, the data can't actually be removed
when you perform a delete, instead, a marker (also called a "tombstone") is written to indicate the value's new status.
Never fear though, on the first compaction that occurs between the data and the tombstone, the data will be expunged
completely and the corresponding disk space recovered.
@@ -350,40 +350,40 @@ What are seeds?
Seeds are used during startup to discover the cluster. They are referred by new nodes on bootstrap to learn about other nodes in the ring. When you add a new node to the cluster, you
must specify one live seed to contact.

-In ScyllaDB versions earlier than Scylla Open Source 4.3 and Scylla Enterprise 2021.1, a seed node has an additional
-function: it assists with gossip convergence. See :doc:`Scylla Seed Nodes </kb/seed-nodes/>` for details.
+In ScyllaDB versions earlier than ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1, a seed node has an additional
+function: it assists with gossip convergence. See :doc:`ScyllaDB Seed Nodes </kb/seed-nodes/>` for details.

We recommend updating your ScyllaDB to version 4.3 or later (Open Source) or 2021.1 or later (Enterprise).

.. _faq-raid0-required:

-Is RAID0 required for Scylla? Why?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Is RAID0 required for ScyllaDB? Why?
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-No, it is not required, but it is highly recommended when using Scylla with more than one drive. Scylla requires one drive for its data file and one drive for commit log (can be the same). If you want to take advantage of more than one drive, the easiest way to do so is set RAID0 (striped) across all of them. If you choose, scylla_setup will setup RAID0 for you on your selected drive, as well as XFS file system (recommended).
-Similarly, Scylla AMI on EC2 will automatically mount all available SSD drives in RAID0.
+No, it is not required, but it is highly recommended when using ScyllaDB with more than one drive. ScyllaDB requires one drive for its data files and one drive for the commit log (they can be the same). If you want to take advantage of more than one drive, the easiest way to do so is to set up RAID0 (striped) across all of them. If you choose, scylla_setup will set up RAID0 for you on your selected drives, as well as an XFS file system (recommended).
+Similarly, ScyllaDB AMI on EC2 will automatically mount all available SSD drives in RAID0.

Should I use RAID for replications, such as RAID1, RAID4 or higher?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-You can, but it is not recommended. Scylla :doc:`clustering architecture </architecture/ringarchitecture/index/>` already provides data replication across nodes and DCs.
+You can, but it is not recommended. ScyllaDB :doc:`clustering architecture </architecture/ringarchitecture/index/>` already provides data replication across nodes and DCs.
Adding another layer of replication in each node is redundant, slows down I/O operation and reduces available storage.
Want a higher level of replication?
Increase the Replication Factor (RF) of :doc:`relevant Keyspaces </cql/ddl/>`.

Can I use JBOD and not use RAID0?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-:term:`JBOD` is not supported by Scylla.
+:term:`JBOD` is not supported by ScyllaDB.

-:abbr:`JBOD (Just a Bunch Of Disks)` may be a reasonable solution for Cassandra because it rebuilds nodes very slowly. As this is not an issue for Scylla, it's more efficient to use RAID.
+:abbr:`JBOD (Just a Bunch Of Disks)` may be a reasonable solution for Cassandra because it rebuilds nodes very slowly. As this is not an issue for ScyllaDB, it's more efficient to use RAID.

Explanation: There are two types of deployment when multiple disks exist. In the JBOD case, each disk is an isolated filesystem. I/O isn't stripped and thus performance can be slower than that of RAID. In addition, as the free space isn't shared, a single disk can be full while the others are available.

The benefit of JBOD vs RAID is that it isolates failures to individual disk and not the entire node.
-However, Scylla rebuilds nodes quickly and thus it is not an issue when rebuilding an entire node.
+However, ScyllaDB rebuilds nodes quickly and thus it is not an issue when rebuilding an entire node.

-As a result, it is much more advantageous to use RAID with Scylla
+As a result, it is much more advantageous to use RAID with ScyllaDB.


Is ``Nodetool Repair`` a Local (One Node) Operation or a Global (Full Cluster) Operation?
@@ -409,7 +409,7 @@ You can restrict the number of items in the IN clause with the following options
We recommend that you use these options with caution. Changing the maximum number of IN restrictions to more than 100 may result in server instability.

The options can be configured on the command line, passed with ``SCYLLA_ARGS`` in ``/etc/default/scylla-server`` or ``/etc/sysconfig/scylla-server``,
-or added to your ``scylla.yaml`` (see :doc:`Scylla Configuration<operating-scylla/admin>`).
+or added to your ``scylla.yaml`` (see :doc:`ScyllaDB Configuration<operating-scylla/admin>`).

Can I change the coredump mount point?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -418,7 +418,7 @@ Yes, by edit ``sysctl.d``.

Procedure

-1. Create ``/etc/sysctl.d/99-scylla-coredump.conf`` (this file exists by default in Scylla AMI).
+1. Create ``/etc/sysctl.d/99-scylla-coredump.conf`` (this file exists by default in ScyllaDB AMI).

2. Open the ``99-scylla-coredump.conf`` file.

diff --git a/docs/getting-started/_common/system-configuration-index.rst b/docs/getting-started/_common/system-configuration-index.rst
--- a/docs/getting-started/_common/system-configuration-index.rst
+++ b/docs/getting-started/_common/system-configuration-index.rst
@@ -1,15 +1,15 @@
-Configure Scylla
-================
+Configure ScyllaDB
+==================

-System configuration steps are performed automatically by the Scylla RPM and deb packages. For information on getting started with Scylla, see :doc:`Getting Started </getting-started/index>`.
+System configuration steps are performed automatically by the ScyllaDB RPM and deb packages. For information on getting started with ScyllaDB, see :doc:`Getting Started </getting-started/index>`.

-All Scylla AMIs and Docker images are pre-configured by a script with the following steps. This document is provided as a reference.
+All ScyllaDB AMIs and Docker images are pre-configured by a script with the following steps. This document is provided as a reference.

.. _system-configuration-files-and-scripts:

System Configuration Files and Scripts
--------------------------------------
-Several system configuration settings should be applied. For ease of use, the necessary scripts and configuration files are provided. Files are under :code:`dist/common` and :code:`seastar/scripts` in the Scylla source code and installed in the appropriate system locations. (For information on Scylla’s own configuration file, see Scylla Configuration.)
+Several system configuration settings should be applied. For ease of use, the necessary scripts and configuration files are provided. Files are under :code:`dist/common` and :code:`seastar/scripts` in the ScyllaDB source code and installed in the appropriate system locations. (For information on ScyllaDB’s own configuration file, see ScyllaDB Configuration.)

.. list-table:: System Configuration Files
:widths: 50 50
@@ -26,34 +26,34 @@ Several system configuration settings should be applied. For ease of use, the ne

.. _system-configuration-scripts:

-Scylla Scripts
---------------
+ScyllaDB Scripts
+----------------

-The following scripts are available for you to run for configuring Scylla. Some of these scripts are included in the scylla_setup script. This script is used for configuring Scylla the first time, or when the system hardware changes.
+The following scripts are available for you to run for configuring ScyllaDB. Some of these scripts are included in the scylla_setup script. This script is used for configuring ScyllaDB the first time, or when the system hardware changes.


-.. list-table:: Scylla Setup Scripts
+.. list-table:: ScyllaDB Setup Scripts
:widths: 40 60
:header-rows: 1

* - perftune.py
- Configures various system parameters in order to improve the Seastar application performance
* - scylla_bootparam_setup
- - Sets the kernel options in the bootloader. In addition, it tunes Linux boot-time parameters for the node that Scylla is running on (e.g. huge page setup).
+ - Sets the kernel options in the bootloader. In addition, it tunes Linux boot-time parameters for the node that ScyllaDB is running on (e.g. huge page setup).
* - scylla_coredump_setup
- - Sets up coredump facilities for Scylla. This may include uninstalling existing crash reporting software for compatibility reasons.
+ - Sets up coredump facilities for ScyllaDB. This may include uninstalling existing crash reporting software for compatibility reasons.
* - scylla_io_setup
- Benchmarks the disks and generates the io.conf and io_properties.yaml files.
* - scylla_ntp_setup
- Configures Network Time Protocol
* - scylla_prepare
- - This script is run automatically every time Scylla starts and the machine needs to be tuned.
+ - This script is run automatically every time ScyllaDB starts and the machine needs to be tuned.
* - scylla_raid_setup
- Configures RAID and makes an XFS filesystem.
* - scylla_save_coredump
- Compresses a core dump file (Ubuntu only)
* - scylla_setup
- - Sets up the Scylla configuration. Many of these scripts are included in the setup script.
+ - Sets up the ScyllaDB configuration. Many of these scripts are included in the setup script.
* - scylla_stop
- Resets network mode if running in virtio or DPDK mode.
* - scylla_swap_setup
@@ -62,7 +62,7 @@ The following scripts are available for you to run for configuring Scylla. Some
- Rewrites the /etc/sysconfig/scylla file.


-.. list-table:: Scylla Scripts (Not included with Scylla-Setup)
+.. list-table:: ScyllaDB Scripts (Not included with scylla_setup)
:widths: 40 60
:header-rows: 1

@@ -71,29 +71,29 @@ The following scripts are available for you to run for configuring Scylla. Some
* - node_health_check
- Gathers metrics and information on the node, checking that the node is configured correctly.
* - scylla-blocktune
- - Tunes the filesystem and block layer (e.g. block size I/O scheduler configuration) for Scylla.
+ - Tunes the filesystem and block layer (e.g. block size I/O scheduler configuration) for ScyllaDB.
* - scylla_cpuscaling_setup
- Configures the CPU frequency scaling (IOW, puts the CPU in "performance" mode, instead of the slower "powersave" mode).
* - scylla_cpuset_setup
- - Configures which CPUs the Scylla server threads run on.
+ - Configures which CPUs the ScyllaDB server threads run on.
* - scylla_fstrim
- Runs ``fstrim``, which cleans up unused blocks of data from your SSD storage device. It runs automatically if you run scylla_fstrim_set up (see below).
* - scylla_fstrim_setup
- Configures a job so that ``fstrim`` runs automatically.
* - scylla-housekeeping
- - Checks if there are new versions of Scylla available, and also shares some telemetry information for us to keep track of what versions are installed on the field.
+ - Checks if there are new versions of ScyllaDB available, and also shares some telemetry information for us to keep track of which versions are installed in the field.
* - scylla_rsyslog_setup
- Configures the "rsyslog" service, which is used to send logs to a remote server.
* - scylla_selinux_setup
- - Disables SELinux for Scylla.
+ - Disables SELinux for ScyllaDB.

.. _note-io:

.. include:: /getting-started/_common/note-io.rst

Bootloader Settings
-------------------
-If Scylla is installed on an Amazon AMI, the bootloader should provide the :code:`clocksource=tsc` and :code:`tsc=reliable` options. This enables an accurate, high-resolution `Time Stamp Counter (TSC) <https://software.intel.com/en-us/blogs/2013/06/20/eliminate-the-dreaded-clocksource-is-unstable-message-switch-to-tsc-for-a-stable>`_ for setting the system time.
+If ScyllaDB is installed on an Amazon AMI, the bootloader should provide the :code:`clocksource=tsc` and :code:`tsc=reliable` options. This enables an accurate, high-resolution `Time Stamp Counter (TSC) <https://software.intel.com/en-us/blogs/2013/06/20/eliminate-the-dreaded-clocksource-is-unstable-message-switch-to-tsc-for-a-stable>`_ for setting the system time.

This configuration is provided in the file :code:`/usr/lib/scylla/scylla_bootparam_setup`.

@@ -105,31 +105,31 @@ This configuration is provided in the file :code:`/usr/lib/scylla/scylla_bootpar

Set Up Network Time Synchronization
-----------------------------------
-It is highly recommended to enforce time synchronization between Scylla servers.
+It is highly recommended to enforce time synchronization between ScyllaDB servers.

Run :code:`ntpstat` on all nodes to check that system time is synchronized. If you are running in a virtualized environment and your system time is set on the host, you may not need to run NTP on the guest. Check the documentation for your platform.

-If you have your own time servers shared with an application using Scylla, use the same NTP configuration as for your application servers. The script :code:`/usr/lib/scylla/scylla_ntp_setup` provides sensible defaults, using Amazon NTP servers if installed on the Amazon cloud, and other pool NTP servers otherwise.
+If you have your own time servers shared with an application using ScyllaDB, use the same NTP configuration as for your application servers. The script :code:`/usr/lib/scylla/scylla_ntp_setup` provides sensible defaults, using Amazon NTP servers if installed on the Amazon cloud, and other pool NTP servers otherwise.

Set Up RAID and Filesystem
--------------------------
-Setting the file system to XFS is the most important and mandatory for production. Scylla will significantly slow down without it.
+Setting the file system to XFS is the most important step and is mandatory for production. ScyllaDB will significantly slow down without it.

-The script :code:`/usr/lib/scylla/scylla_raid_setup` performs the necessary RAID configuration and XFS filesystem creation for Scylla.
+The script :code:`/usr/lib/scylla/scylla_raid_setup` performs the necessary RAID configuration and XFS filesystem creation for ScyllaDB.

Arguments to the script are

* :code:`-d` specify disks for RAID
* :code:`-r` MD device name for RAID
* :code:`-u` update /etc/fstab for RAID

-On the Scylla AMI, the RAID configuration is handled automatically in the :code:`/usr/lib/scylla/scylla_prepare script`.
+On the ScyllaDB AMI, the RAID configuration is handled automatically in the :code:`/usr/lib/scylla/scylla_prepare` script.

CPU Pinning
-----------

-When installing Scylla, it is highly recommended to use the :doc:`scylla_setup </getting-started/system-configuration>` script.
-Scylla should not share CPUs with any CPU consuming process. In addition, when running Scylla on AWS, we recommend pinning all NIC IRQs to CPU0 (due to the same reason). As a result, Scylla should be prevented from running on CPU0 and its hyper-threading siblings. To verify that Scylla is pinning CPU0, use the command below:
+When installing ScyllaDB, it is highly recommended to use the :doc:`scylla_setup </getting-started/system-configuration>` script.
+ScyllaDB should not share CPUs with any CPU-consuming process. In addition, when running ScyllaDB on AWS, we recommend pinning all NIC IRQs to CPU0 for the same reason. As a result, ScyllaDB should be prevented from running on CPU0 and its hyper-threading siblings. To verify that ScyllaDB is not running on CPU0, use the command below:
If the node has four or fewer CPUs, don't use this option.

To verify:
@@ -156,15 +156,15 @@ Networking

See :doc:`Seastar Perftune </operating-scylla/admin-tools/perftune>` for details.

-Configuring Scylla
-------------------
-Configuration for Scylla itself is in the :ref:`Scylla Configuration <admin-scylla-configuration>` section of the administration guide.
+Configuring ScyllaDB
+--------------------
+Configuration for ScyllaDB itself is in the :ref:`ScyllaDB Configuration <admin-scylla-configuration>` section of the administration guide.

Development System Configuration
--------------------------------
*The following item is not required in production.*

-When working on DPDK support for Scylla, enable hugepages.
+When working on DPDK support for ScyllaDB, enable hugepages.

.. code-block:: shell

diff --git a/docs/getting-started/config-commands.rst b/docs/getting-started/config-commands.rst
--- a/docs/getting-started/config-commands.rst
+++ b/docs/getting-started/config-commands.rst
@@ -2,27 +2,27 @@
ScyllaDB Configuration Reference
=================================

-This guide describes the commands that you can use to configure your Scylla clusters.
+This guide describes the commands that you can use to configure your ScyllaDB clusters.
The commands are all sent via the command line in a terminal and sudo or root access is not required as long as you have permission to execute in the directory.

.. caution:: You should **only** use configuration settings which are officially supported.

-A list of all Scylla commands can be obtained by running
+A list of all ScyllaDB commands can be obtained by running

.. code-block:: none

scylla --help

-.. note:: This command displays all Scylla commands as well as Seastar commands. The Seastar commands are listed as Core Options.
+.. note:: This command displays all ScyllaDB commands as well as Seastar commands. The Seastar commands are listed as Core Options.

For example:

.. code-block:: none

- Scylla version 4.2.3-0.20210104.24346215c2 with build-id 0c8faf8bb8a3a0eda9337aad98ed3a6d814a4fa9 starting ...
+ ScyllaDB version 4.2.3-0.20210104.24346215c2 with build-id 0c8faf8bb8a3a0eda9337aad98ed3a6d814a4fa9 starting ...
command used: "scylla --help"
parsed command line options: [help]
- Scylla options:
+ ScyllaDB options:
-h [ --help ] show help message
--version print version number and exit
--options-file arg configuration file (i.e.
@@ -46,10 +46,10 @@ For example:

.. note:: This is an incomplete screenshot. For the complete file, run the command in a terminal.

-Scylla Configuration Files and Scylla Commands
-----------------------------------------------
+ScyllaDB Configuration Files and ScyllaDB Commands
+--------------------------------------------------

-Some Scylla Command Line commands are derived from the `scylla.yaml <https://github.com/scylladb/scylla/blob/master/conf/scylla.yaml>`_ configuration parameters.
+Some ScyllaDB command-line options are derived from the `scylla.yaml <https://github.com/scylladb/scylla/blob/master/conf/scylla.yaml>`_ configuration parameters.

For example, in the case of ``cluster_name: 'Test Cluster'`` as seen in the `scylla.yaml <https://github.com/scylladb/scylla/blob/master/conf/scylla.yaml>`_ configuration parameters.

diff --git a/docs/getting-started/configure.rst b/docs/getting-started/configure.rst
--- a/docs/getting-started/configure.rst
+++ b/docs/getting-started/configure.rst
@@ -6,7 +6,7 @@ Configure ScyllaDB
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Configure Scylla</h5>
+ <h5 id="getting-started">Configure ScyllaDB</h5>
</div>
<div class="medium-9 columns">

diff --git a/docs/getting-started/index.rst b/docs/getting-started/index.rst
--- a/docs/getting-started/index.rst
+++ b/docs/getting-started/index.rst
@@ -48,8 +48,8 @@ Getting Started
:id: "getting-started"
:class: my-panel

- * :doc:`Migrate to ScyllaDB </using-scylla/migrate-scylla>` - How to migrate your current database to Scylla
- * :doc:`Integrate with ScyllaDB </using-scylla/integrations/index>` - Integration solutions with Scylla
+ * :doc:`Migrate to ScyllaDB </using-scylla/migrate-scylla>` - How to migrate your current database to ScyllaDB
+ * :doc:`Integrate with ScyllaDB </using-scylla/integrations/index>` - Integration solutions with ScyllaDB


.. panel-box::
diff --git a/docs/getting-started/installation-common/air-gapped-install.rst b/docs/getting-started/installation-common/air-gapped-install.rst
--- a/docs/getting-started/installation-common/air-gapped-install.rst
+++ b/docs/getting-started/installation-common/air-gapped-install.rst
@@ -3,9 +3,9 @@ Air-gapped Server Installation
==============================

An air-gapped server is a server without any access to external repositories or connections to any network, including the internet.
-To install Scylla on an air-gapped server, you first need to download the relevant files from a server that is not air-gapped and then and move the files to the air-gapped servers to complete the installation.
+To install ScyllaDB on an air-gapped server, you first need to download the relevant files from a server that is not air-gapped and then move the files to the air-gapped servers to complete the installation.

-There are two ways to install Scylla on an air-gapped server:
+There are two ways to install ScyllaDB on an air-gapped server:

-- With root privileges (recommended): download the OS specific packages (rpms and debs) and install them with the package manager (dnf and apt). See `Install Scylla on an Air-gapped Server Using the Packages (Option 2) <https://www.scylladb.com/download/?platform=tar>`_.
-- Without root privileges: using the :doc:`Scylla Unified Installer <unified-installer>`.
+- With root privileges (recommended): download the OS specific packages (rpms and debs) and install them with the package manager (dnf and apt). See `Install ScyllaDB on an Air-gapped Server Using the Packages (Option 2) <https://www.scylladb.com/download/?platform=tar>`_.
+- Without root privileges: using the :doc:`ScyllaDB Unified Installer <unified-installer>`.
diff --git a/docs/getting-started/installation-common/dev-mod.rst b/docs/getting-started/installation-common/dev-mod.rst
--- a/docs/getting-started/installation-common/dev-mod.rst
+++ b/docs/getting-started/installation-common/dev-mod.rst
@@ -1,7 +1,7 @@
ScyllaDB Developer Mode
========================

-If you want to use Scylla in developer mode you need to use the command below (using root privileges)
+If you want to use ScyllaDB in developer mode you need to use the command below (using root privileges)

``sudo scylla_dev_mode_setup --developer-mode 1``

diff --git a/docs/getting-started/installation-common/disable-housekeeping.rst b/docs/getting-started/installation-common/disable-housekeeping.rst
--- a/docs/getting-started/installation-common/disable-housekeeping.rst
+++ b/docs/getting-started/installation-common/disable-housekeeping.rst
@@ -3,11 +3,11 @@
ScyllaDB Housekeeping and how to disable it
============================================

-It is always recommended to run the latest version of Scylla Open Source or Scylla Enterprise.
+It is always recommended to run the latest version of ScyllaDB Open Source or ScyllaDB Enterprise.
The latest stable release version is always available from the `Download Center <https://www.scylladb.com/download/>`_.

-When you install Scylla, it installs by default two services: **scylla-housekeeping-restart** and **scylla-housekeeping-daily**. These services check for the latest Scylla version and prompt the user if they are using a version that is older than what is publicly available.
-Information about your Scylla deployment, including the Scylla version currently used, as well as unique user and server identifiers, are collected by a centralized service.
+When you install ScyllaDB, it installs two services by default: **scylla-housekeeping-restart** and **scylla-housekeeping-daily**. These services check for the latest ScyllaDB version and prompt the user if they are using a version that is older than what is publicly available.
+Information about your ScyllaDB deployment, including the ScyllaDB version currently used, as well as unique user and server identifiers, is collected by a centralized service.

To disable these service, update file ``/etc/scylla.d/housekeeping.cfg`` as follow: ``check-version: False``

diff --git a/docs/getting-started/logging.rst b/docs/getting-started/logging.rst
--- a/docs/getting-started/logging.rst
+++ b/docs/getting-started/logging.rst
@@ -3,7 +3,7 @@ Logging

Logging with the systemd journal (CentOS, Amazon AMI, Ubuntu, Debian)
---------------------------------------------------------------------
-On distributions with systemd, Scylla logs are written to the `systemd journal <http://www.freedesktop.org/software/systemd/man/systemd-journald.service.html>`_. You can retrieve log entries with the `journalctl <http://www.freedesktop.org/software/systemd/man/journalctl.html>`_ command.
+On distributions with systemd, ScyllaDB logs are written to the `systemd journal <http://www.freedesktop.org/software/systemd/man/systemd-journald.service.html>`_. You can retrieve log entries with the `journalctl <http://www.freedesktop.org/software/systemd/man/journalctl.html>`_ command.

Listed below are a few useful examples.

@@ -19,7 +19,7 @@ Listed below are a few useful examples.

journalctl _COMM=scylla

-* filter only Scylla logs by priority:
+* filter only ScyllaDB logs by priority:

.. code-block:: shell

@@ -29,7 +29,7 @@ Listed below are a few useful examples.

journalctl _COMM=scylla -p warning

-* filter only Scylla logs by date:
+* filter only ScyllaDB logs by date:

.. code-block:: shell

@@ -43,17 +43,17 @@ Listed below are a few useful examples.

journalctl _COMM=scylla --since yesterday

-* filter only Scylla logs since last server boot:
+* filter only ScyllaDB logs since last server boot:

.. code-block:: shell

journalctl _COMM=scylla -b

Logging on Ubuntu 14.04
-----------------------
-On Ubuntu 14.04, Scylla writes its initial boot message into :code:`/var/log/upstart/scylla-server.log`.
+On Ubuntu 14.04, ScyllaDB writes its initial boot message into :code:`/var/log/upstart/scylla-server.log`.

-After Scylla has started, logs are stored in :code:`/var/log/syslog`. Scylla logs can be filter by creating a :code:`rsyslog` configuration file with the following rule (for example, in :code:`/etc/rsyslog.d/10-scylla.conf`)
+After ScyllaDB has started, logs are stored in :code:`/var/log/syslog`. ScyllaDB logs can be filtered by creating a :code:`rsyslog` configuration file with the following rule (for example, in :code:`/etc/rsyslog.d/10-scylla.conf`)

.. code-block:: shell

@@ -67,11 +67,11 @@ And then creating the log file with the correct permissions and restarting the s
install -o syslog -g adm -m 0640 /dev/null /var/log/scylla/scylla.log
service rsyslog restart

-This will send Scylla only logs to :code:`/var/log/scylla/scylla.log`
+This will send only ScyllaDB logs to :code:`/var/log/scylla/scylla.log`.

Logging on Docker
-----------------
-Starting from Scylla 1.3, `Scylla Docker <https://hub.docker.com/r/scylladb/scylla/>`_, you should use :code:`docker logs` command to access Scylla server and JMX proxy logs
+Starting from ScyllaDB 1.3 (`ScyllaDB Docker <https://hub.docker.com/r/scylladb/scylla/>`_), you should use the :code:`docker logs` command to access ScyllaDB server and JMX proxy logs.


.. include:: /rst_include/advance-index.rst
diff --git a/docs/getting-started/scylla-in-a-shared-environment.rst b/docs/getting-started/scylla-in-a-shared-environment.rst
--- a/docs/getting-started/scylla-in-a-shared-environment.rst
+++ b/docs/getting-started/scylla-in-a-shared-environment.rst
@@ -3,80 +3,80 @@
ScyllaDB in a Shared Environment
=================================

-Scylla is designed to utilize all of the resources on the machine. It
+ScyllaDB is designed to utilize all of the resources on the machine. It
runs on: disk and network bandwidth, RAM, and CPU. This allows you to
achieve maximum performance with a minimal node count. In development
and test, however, your nodes might be using a shared machine, which
-Scylla cannot dominate. This article explains how to configure Scylla
+ScyllaDB cannot dominate. This article explains how to configure ScyllaDB
for shared environments. For some production environments, these settings
may be preferred as well.

-Note that a Docker image is a viable and even simpler option - `Scylla
+Note that a Docker image is a viable and even simpler option - `ScyllaDB
on dockerhub <https://hub.docker.com/r/scylladb/scylla/>`_


Memory
------

-The most critical resource that Scylla consumes is memory. By default,
-when Scylla starts up, it inspects the node's hardware configuration and
+The most critical resource that ScyllaDB consumes is memory. By default,
+when ScyllaDB starts up, it inspects the node's hardware configuration and
claims *all* memory to itself, leaving some reserve for the operating
system (OS). This is in contrast to most open-source databases that
leave most memory for the OS, but is similar to most commercial
databases.

In a shared environment, particularly on a desktop or laptop, gobbling
-up all the machine's memory can reduce the user experience, so Scylla
+up all the machine's memory can reduce the user experience, so ScyllaDB
allows reducing its memory usage to a given quantity.

On Ubuntu, open a terminal and edit ``/etc/default/scylla-server``, and add ``--memory 2G``
-to restrict Scylla to 2 gigabytes of RAM.
+to restrict ScyllaDB to 2 gigabytes of RAM.

On Red Hat / CentOS, open a terminal and edit ``/etc/sysconfig/scylla-server``, and add
-``--memory 2G`` to restrict Scylla to 2 gigabytes of RAM.
+``--memory 2G`` to restrict ScyllaDB to 2 gigabytes of RAM.

-If starting Scylla from the command line, simply append ``--memory 2G``
+If starting ScyllaDB from the command line, simply append ``--memory 2G``
to your command line.
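
As a minimal sketch, launching the server directly with the memory cap looks like this (all other options are left at their defaults):

.. code-block:: shell

   scylla --memory 2G

When using the service files mentioned above instead, append the same flag to the existing server arguments line and restart the ``scylla-server`` service.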

CPU
---

-By default, Scylla will utilize *all* of your processors (in some
+By default, ScyllaDB will utilize *all* of your processors (in some
configurations, particularly on Amazon AWS, it may leave a core for the
-operating system). In addition, Scylla will pin its threads to specific
+operating system). In addition, ScyllaDB will pin its threads to specific
cores in order to maximize the utilization of the processor on-chip
caches. On a dedicated node, this allows maximum throughput, but on a
desktop or laptop, it can cause a sluggish user interface.

-Scylla offers two options to restrict its CPU utilization:
+ScyllaDB offers two options to restrict its CPU utilization:

-- ``--smp N`` restricts Scylla to N logical cores; for example with
- ``--smp 2`` Scylla will not utilize more than two logical cores
-- ``--overprovisioned`` tells Scylla that the machine it is running on
- is used by other processes; so Scylla will not pin its threads or
+- ``--smp N`` restricts ScyllaDB to N logical cores; for example with
+ ``--smp 2`` ScyllaDB will not utilize more than two logical cores
+- ``--overprovisioned`` tells ScyllaDB that the machine it is running on
+ is used by other processes; so ScyllaDB will not pin its threads or
memory, and will reduce the amount of polling it does to a minimum.

On Ubuntu edit ``/etc/default/scylla-server``, and add
-``--smp 2 --overprovisioned`` to restrict Scylla to 2 logical cores.
+``--smp 2 --overprovisioned`` to restrict ScyllaDB to 2 logical cores.

On Red Hat / CentOS edit ``/etc/sysconfig/scylla-server``, and add
-``--smp 2 --overprovisioned`` to restrict Scylla to 2 logical cores.
+``--smp 2 --overprovisioned`` to restrict ScyllaDB to 2 logical cores.

-If starting Scylla from the command line, simply append
+If starting ScyllaDB from the command line, simply append
``--smp 2 --overprovisioned`` to your command line.
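
Putting the options together, a command line for a constrained development machine might look like the following sketch (two logical cores and 2 GB of RAM; the values are illustrative):

.. code-block:: shell

   scylla --smp 2 --overprovisioned --memory 2G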

Other Restrictions
------------------

-When starting up, Scylla will check the hardware and operating system
-configuration to verify that it is compatible with Scylla's performance requirements. See :doc:`developer mode </getting-started/installation-common/dev-mod>` for more instructions.
+When starting up, ScyllaDB will check the hardware and operating system
+configuration to verify that it is compatible with ScyllaDB's performance requirements. See :doc:`developer mode </getting-started/installation-common/dev-mod>` for more instructions.

Summary
-------

-Scylla comes out of the box ready for production use with maximum
+ScyllaDB comes out of the box ready for production use with maximum
performance but may need to be tuned for development or test uses. This
-tuning is simple to apply and results in a Scylla server that can
+tuning is simple to apply and results in a ScyllaDB server that can
coexist with other processes or a GUI on the system.

.. include:: /rst_include/advance-index.rst
diff --git a/docs/getting-started/system-requirements.rst b/docs/getting-started/system-requirements.rst
--- a/docs/getting-started/system-requirements.rst
+++ b/docs/getting-started/system-requirements.rst
@@ -52,7 +52,7 @@ In terms of the number of cores, any number will work since ScyllaDB scales up w
A practical approach is to use a large number of cores as long as the hardware price remains reasonable.
Between 20-60 logical cores (including hyperthreading) is a recommended number. However, any number will fit.
When using virtual machines, containers, or the public cloud, remember that each virtual CPU is mapped to a single logical core, or thread.
-Allow ScyllaDB to run independently without any additional CPU intensive tasks on the same server/cores as Scylla.
+Allow ScyllaDB to run independently, without any additional CPU-intensive tasks on the same server/cores.

.. _system-requirements-memory:

diff --git a/docs/index.rst b/docs/index.rst
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -7,7 +7,7 @@ ScyllaDB Open Source Documentation
.. meta::
:title: ScyllaDB Open Source Documentation
:description: ScyllaDB Open Source Documentation
- :keywords: ScyllaDB Open Source, Scylla Open Source, Scylla docs, ScyllaDB documentation, Scylla Documentation
+ :keywords: ScyllaDB Open Source, ScyllaDB docs, ScyllaDB documentation

About This User Guide
-----------------------
diff --git a/docs/kb/_common/kb-article-template.rst b/docs/kb/_common/kb-article-template.rst
--- a/docs/kb/_common/kb-article-template.rst
+++ b/docs/kb/_common/kb-article-template.rst
@@ -12,9 +12,9 @@ Title in Title Caps

.. in 1-3 words what will users learn by reading this article?

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

-.. Choose (Application Developer, Scylla Administrator, Internal, All)
+.. Choose (Application Developer, ScyllaDB Administrator, Internal, All)

Synopsis
--------
diff --git a/docs/kb/cdc-experimental-upgrade.rst b/docs/kb/cdc-experimental-upgrade.rst
--- a/docs/kb/cdc-experimental-upgrade.rst
+++ b/docs/kb/cdc-experimental-upgrade.rst
@@ -2,7 +2,7 @@
Upgrading from experimental CDC
===============================

-If you used CDC in Scylla 4.2 or earlier by enabling the experimental feature and you upgrade to 4.3, you must perform additional steps for CDC to work properly.
+If you used CDC in ScyllaDB 4.2 or earlier by enabling the experimental feature and you upgrade to 4.3, you must perform additional steps for CDC to work properly.

First, if you enabled CDC on any table (using ``with cdc = { ... }``), you should stop all writes to this table. Then disable CDC before the upgrade:

@@ -18,7 +18,7 @@ This should work even if you already upgraded, but preferably disable CDC on all

After disabling CDC and finishing the upgrade you can safely re-enable it.

-The next step is running ``nodetool checkAndRepairCdcStreams``. Up to this point, Scylla may have periodically reported the following errors in its logs:
+The next step is running ``nodetool checkAndRepairCdcStreams``. Up to this point, ScyllaDB may have periodically reported the following errors in its logs:

.. code-block:: none

diff --git a/docs/kb/compaction.rst b/docs/kb/compaction.rst
--- a/docs/kb/compaction.rst
+++ b/docs/kb/compaction.rst
@@ -4,16 +4,16 @@ Compaction

This document gives a high level overview of Compaction, focusing on what compaction is, and how it works. There is a different document that covers the :doc:`CQL syntax </cql/compaction>` for setting a compaction strategy. There is also another document, :doc:`Compaction Strategy Matrix </architecture/compaction/compaction-strategies>`, that covers how to decide which strategy works best.

-How Scylla Writes Data
-----------------------
+How ScyllaDB Writes Data
+------------------------

-Scylla’s write path follows the well-known **Log Structured Merge (LSM)** design for efficient writes that are immediately available for reads. Scylla is not the first project to use this method. Popular projects to use this method include Lucene Search Engine, Google BigTable, and Apache Cassandra.
+ScyllaDB’s write path follows the well-known **Log Structured Merge (LSM)** design for efficient writes that are immediately available for reads. ScyllaDB is not the first project to use this method. Popular projects to use this method include Lucene Search Engine, Google BigTable, and Apache Cassandra.

-Scylla writes its updates to a :term:`memory table (MemTable)<MemTable>`, and when that becomes too big, it is flushed to a new file. This file is sorted to make it easy to search and later merge. This is why the tables are known as Sorted String Tables or :term:`SSTables<SSTable>`.
+ScyllaDB writes its updates to a :term:`memory table (MemTable)<MemTable>`, and when that becomes too big, it is flushed to a new file. This file is sorted to make it easy to search and later merge. This is why the tables are known as Sorted String Tables or :term:`SSTables<SSTable>`.

.. image:: write-path-image-memtable-sstable.png

-In time, two major problems start to appear. First, data in one SSTable which is later modified or deleted in another SSTable wastes space as both tables are present in the system. Second, when data is split across many SSTables, read requests are processed slower as many SSTables need to be read. Scylla mitigates the second problem by using a bloom filter and other techniques to avoid reading from SSTables that do not include the desired partition. However, as the number of SSTables grows, inevitably so do the number of disk blocks from which we need to read on every read query. For these reasons, as soon as enough SSTables have accumulated, Scylla performs a :term:`compaction<Compaction>`.
+In time, two major problems start to appear. First, data in one SSTable which is later modified or deleted in another SSTable wastes space as both tables are present in the system. Second, when data is split across many SSTables, read requests are processed slower as many SSTables need to be read. ScyllaDB mitigates the second problem by using a bloom filter and other techniques to avoid reading from SSTables that do not include the desired partition. However, as the number of SSTables grows, inevitably so do the number of disk blocks from which we need to read on every read query. For these reasons, as soon as enough SSTables have accumulated, ScyllaDB performs a :term:`compaction<Compaction>`.


Compaction Overview
@@ -24,17 +24,17 @@ Compaction merges several SSTables into new SSTable(s) which contain(s) only the
There are two types of compactions:

* Minor Compaction
- Scylla automatically triggers a compaction of some SSTables, according to a :term:`compaction strategy<Compaction Strategy>` (as described below). This is the recommended method.
+ ScyllaDB automatically triggers a compaction of some SSTables, according to a :term:`compaction strategy<Compaction Strategy>` (as described below). This is the recommended method.

* Major Compaction
A user triggers (using nodetool) a compaction over all SSTables, merging the individual tables according to the selected compaction strategy.

-.. caution:: It is always best to allow Scylla to automatically run minor compactions. Major compactions can exhaust resources, increase operational costs, and take up valuable disk space. This requires you to have 50% more disk space than your data unless you are using :ref:`Incremental compaction strategy (ICS) <incremental-compaction-strategy-ics>`.
+.. caution:: It is always best to allow ScyllaDB to automatically run minor compactions. Major compactions can exhaust resources, increase operational costs, and take up valuable disk space. This requires you to have 50% more disk space than your data unless you are using :ref:`Incremental compaction strategy (ICS) <incremental-compaction-strategy-ics>`.
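
If you nevertheless need to trigger a major compaction manually, it is started with nodetool; a minimal sketch, where the keyspace and table names are placeholders:

.. code-block:: shell

   nodetool compact my_keyspace my_table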

View Compaction Statistics
--------------------------

-Scylla has tools you can use to see the status of your compactions. These include nodetool (:doc:`compactionhistory </operating-scylla/nodetool-commands/compactionhistory>` and :doc:`compactionstats </operating-scylla/nodetool-commands/compactionstats>`) and the Grafana dashboards which are part of the `Scylla Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ which display the compaction statistics on a per cluster and per node basis. Compaction errors can be seen in the `logs <https://manager.docs.scylladb.com/stable/config/scylla-manager-config.html>`_.
+ScyllaDB has tools you can use to see the status of your compactions. These include nodetool (:doc:`compactionhistory </operating-scylla/nodetool-commands/compactionhistory>` and :doc:`compactionstats </operating-scylla/nodetool-commands/compactionstats>`) and the Grafana dashboards, which are part of the `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ and display the compaction statistics on a per-cluster and per-node basis. Compaction errors can be seen in the `logs <https://manager.docs.scylladb.com/stable/config/scylla-manager-config.html>`_.
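
For example, a quick way to inspect running and past compactions from the shell:

.. code-block:: shell

   # Currently running compactions and their progress:
   nodetool compactionstats

   # Recently completed compactions:
   nodetool compactionhistory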

Compaction strategy
-------------------
@@ -111,7 +111,7 @@ Temporary Fallback to STCS

When new data is written very quickly, the Leveled Compaction strategy may be temporarily unable to keep up with the demand. This can result in an accumulation of a large number of SSTables in L0 which in turn create very slow reads as all read requests read from all SSTables in L0. So as an emergency measure, when the number of SSTables in L0 grows to 32, LCS falls back to STCS to quickly reduce the number of SSTables in L0. Eventually, LCS will move this data again to fixed-sized SSTables in higher levels.

-Likewise, when :term:`bootstrapping<Bootstrap>` a new node, SSTables are streamed from other nodes. The level of the remote SSTable is kept to avoid many compactions until after the bootstrap is done. During the bootstrap, the new node receives regular write requests while it is streaming the data from the remote node. Just like any other write, these writes are flushed to L0. If Scylla did an LCS compaction on these L0 SSTables and created SSTables in higher level, this could have blocked the remote SSTables from going to the correct level (remember that SSTables in a run must not have overlapping key ranges). To remedy this from happening, Scylla compacts the tables using STCS only in L0 until the bootstrap process is complete. Once done, all resumes as normal under LCS.
+Likewise, when :term:`bootstrapping<Bootstrap>` a new node, SSTables are streamed from other nodes. The level of the remote SSTable is kept to avoid many compactions until after the bootstrap is done. During the bootstrap, the new node receives regular write requests while it is streaming the data from the remote node. Just like any other write, these writes are flushed to L0. If ScyllaDB did an LCS compaction on these L0 SSTables and created SSTables in a higher level, this could have blocked the remote SSTables from going to the correct level (remember that SSTables in a run must not have overlapping key ranges). To prevent this from happening, ScyllaDB compacts the tables using STCS only in L0 until the bootstrap process is complete. Once done, everything resumes as normal under LCS.

.. _incremental-compaction-strategy-ics:

@@ -180,8 +180,8 @@ References

* :doc:`How to Choose a Compaction Strategy </architecture/compaction/compaction-strategies>`.

-* `Blog: Scylla’s Compaction Strategies Series: Write Amplification in Leveled Compaction <https://www.scylladb.com/2018/01/31/compaction-series-leveled-compaction/>`_
+* `Blog: ScyllaDB’s Compaction Strategies Series: Write Amplification in Leveled Compaction <https://www.scylladb.com/2018/01/31/compaction-series-leveled-compaction/>`_

-* `Blog: Scylla’s Compaction Strategies Series: Space Amplification in Size-Tiered Compaction <https://www.scylladb.com/2018/01/17/compaction-series-space-amplification/>`_
+* `Blog: ScyllaDB’s Compaction Strategies Series: Space Amplification in Size-Tiered Compaction <https://www.scylladb.com/2018/01/17/compaction-series-space-amplification/>`_

* Size Tiered: `Shrikant Bang’s Notes <https://shrikantbang.wordpress.com/2014/04/22/size-tiered-compaction-strategy-in-apache-cassandra/>`_
diff --git a/docs/kb/count-all-rows.rst b/docs/kb/count-all-rows.rst
--- a/docs/kb/count-all-rows.rst
+++ b/docs/kb/count-all-rows.rst
@@ -13,7 +13,7 @@ Trying to count all rows in a table using
may fail with the **ReadTimeout** error.

COUNT() runs a full-scan query on all nodes, which might take a long time to finish. As a result, the count time may be greater than the ScyllaDB query timeout.
-One way to prevent that issue in Scylla 4.4 or later is to increase the timeout for the query using the :ref:`USING TIMEOUT <using-timeout>` directive, for example:
+One way to prevent that issue in ScyllaDB 4.4 or later is to increase the timeout for the query using the :ref:`USING TIMEOUT <using-timeout>` directive, for example:


.. code-block:: cql
diff --git a/docs/kb/cqlsh-more.rst b/docs/kb/cqlsh-more.rst
--- a/docs/kb/cqlsh-more.rst
+++ b/docs/kb/cqlsh-more.rst
@@ -2,7 +2,7 @@
CQL Query Does Not Display Entire Result Set
=============================================

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

If you send a cqlsh query similar to:

@@ -12,7 +12,7 @@ If you send a cqlsh query similar to:

and the results show a single row with ``--More--``, the ``--More--`` indicates that there are additional pages - if you click Enter, additional rows are displayed.

-As the query is using paging (from cqlsh by default page size is 100) - Scylla uses this information internally and will fetch internally page size results. Some of these may be discarded and not returned to you or the output may reveal blank pages where you will see the ``--More--`` prompt causing you to page through empty pages. Neither of these outputs is desired.
+As the query uses paging (the default page size in cqlsh is 100), ScyllaDB uses this information internally and fetches results one page at a time. Some of these results may be discarded and not returned to you, or the output may reveal blank pages where you will see the ``--More--`` prompt, causing you to page through empty pages. Neither of these outputs is desired.

If you need text result of this query as a single run, without the page delimiter, you can :ref:`turn off paging <cqlsh-paging>`.

diff --git a/docs/kb/cqlsh-results.rst b/docs/kb/cqlsh-results.rst
--- a/docs/kb/cqlsh-results.rst
+++ b/docs/kb/cqlsh-results.rst
@@ -9,7 +9,7 @@ When CQLSh query returns partial results with followed by "More"
**Topic: When results are missing from query**


-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

Synopsis
---------
@@ -22,7 +22,7 @@ If you send a cqlsh query similar to:

and the results show a single row with ``--More``, the ``--More--`` indicates that there are additional pages - if you click Enter, additional rows are displayed.

-As the query is using paging (from cqlsh by default page size is 100) - Scylla uses this information internally and will fetch internally page size results. Some of these may be discarded and not returned to you or the output may reveal blank pages where you will see the ``more`` prompt causing you to page through empty pages. Neither of these outputs is desired.
+As the query uses paging (the default page size in cqlsh is 100), ScyllaDB uses this information internally and fetches results one page at a time. Some of these results may be discarded and not returned to you, or the output may reveal blank pages where you will see the ``--More--`` prompt, causing you to page through empty pages. Neither of these outputs is desired.

If you need this in query as a single result you can turn off paging and include paging off in the query.

diff --git a/docs/kb/custom-user.rst b/docs/kb/custom-user.rst
--- a/docs/kb/custom-user.rst
+++ b/docs/kb/custom-user.rst
@@ -1,7 +1,7 @@
-Run Scylla and supporting services as a custom user:group
-=========================================================
+Run ScyllaDB and supporting services as a custom user:group
+===========================================================
**Topic: Planning and setup**
-By default, Scylla runs as user ``scylla`` in group ``scylla``. The following procedure will allow you to use a custom user and group to run Scylla.
+By default, ScyllaDB runs as user ``scylla`` in group ``scylla``. The following procedure will allow you to use a custom user and group to run ScyllaDB.
1. Create the new user and update file permissions

.. code-block:: sh
@@ -38,7 +38,7 @@ By default, Scylla runs as user ``scylla`` in group ``scylla``. The following pr
User=test
Group=test

-6. Reload the daemon settings and start Scylla and node_exporter
+6. Reload the daemon settings and start ScyllaDB and node_exporter

.. code-block:: sh

diff --git a/docs/kb/customizing-cpuset.rst b/docs/kb/customizing-cpuset.rst
--- a/docs/kb/customizing-cpuset.rst
+++ b/docs/kb/customizing-cpuset.rst
@@ -17,7 +17,7 @@ Example 1
^^^^^^^^^^

* 16 CPUs system.
-* You want to run Scylla on CPUs 3, 4, 5, and have IRQs handled on the same CPUs, while allowing other apps on the same machine to benefit
+* You want to run ScyllaDB on CPUs 3, 4, 5, and have IRQs handled on the same CPUs, while allowing other apps on the same machine to benefit
from XFS/RFS/RPS from ``eth5``.

**cpuset.conf:**
@@ -45,7 +45,7 @@ Example 2
^^^^^^^^^^

* 16 CPUs system.
-* You want to run Scylla on CPUs 3, 4, 5, and have IRQs handled on the same CPUs, and Scylla is going to be the only application
+* You want to run ScyllaDB on CPUs 3, 4, 5, and have IRQs handled on the same CPUs, and ScyllaDB is going to be the only application
that will use ``eth5``.

**cpuset.conf:**
@@ -73,7 +73,7 @@ Example 3
^^^^^^^^^^

* 16 CPUs system.
-* You want to run Scylla on CPUs 3, 4, 5, and IRQs handled on CPUs 6,7,8, while allowing other apps on the same machine to benefit
+* You want to run ScyllaDB on CPUs 3, 4, 5, and IRQs handled on CPUs 6,7,8, while allowing other apps on the same machine to benefit
from XFS/RFS/RPS from ``eth5``.


diff --git a/docs/kb/decode-stack-trace.rst b/docs/kb/decode-stack-trace.rst
--- a/docs/kb/decode-stack-trace.rst
+++ b/docs/kb/decode-stack-trace.rst
@@ -2,24 +2,24 @@
Decoding Stack Traces
=====================

-**Topic: Decoding stack traces in Scylla logs**
+**Topic: Decoding stack traces in ScyllaDB logs**

-**Environment: Any Scylla setup on any supported OS**
+**Environment: Any ScyllaDB setup on any supported OS**

**Audience: All**

Synopsis
--------

-This article describes how to decode the stack traces found in Scylla logs.
+This article describes how to decode the stack traces found in ScyllaDB logs.


What are Stack Traces?
----------------------

-Stack traces can appear in the logs due to various errors or in the course of regular database operation. It is useful to be able to decode the trace in order to understand what exactly happened. Decoding the stack trace requires the Debug binaries for the specific Scylla build and OS in use are installed.
+Stack traces can appear in the logs due to various errors or in the course of regular database operation. It is useful to be able to decode the trace in order to understand what exactly happened. Decoding the stack trace requires that the debug binaries for the specific ScyllaDB build and OS in use are installed.

-Note that sharing the stack trace as part of your support ticket or Github issue, helps the Scylla support team to understand the issue better.
+Note that sharing the stack trace as part of your support ticket or GitHub issue helps the ScyllaDB support team understand the issue better.


Install Debug Binary files
@@ -31,27 +31,27 @@ Install the Debug binaries according to your OS distribution

.. group-tab:: RPM based distributions

- For Scylla Enterprise:
+ For ScyllaDB Enterprise:

.. code-block:: none

yum install scylla-enterprise-debuginfo

- For Scylla Open Source:
+ For ScyllaDB Open Source:

.. code-block:: none

yum install scylla-debuginfo

.. group-tab:: DEB based distributions

- For Scylla Enterprise:
+ For ScyllaDB Enterprise:

.. code-block:: none

apt-get install scylla-enterprise-server-dbg

- For Scylla Open Source:
+ For ScyllaDB Open Source:

.. code-block:: none

@@ -114,7 +114,7 @@ Locate and Analyze the Logs

find . -name "scylla*.debug"

- With Scylla 4.1 for example, returns:
+ With ScyllaDB 4.1, for example, this returns:

.. code-block:: shell

diff --git a/docs/kb/disk-utilization.rst b/docs/kb/disk-utilization.rst
--- a/docs/kb/disk-utilization.rst
+++ b/docs/kb/disk-utilization.rst
@@ -6,10 +6,10 @@ Snapshots and Disk Utilization

**Learn: understand how nodetool snapshot utilizes disk space**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**


-When you create a snapshot using :doc:`nodetool snapshot </operating-scylla/nodetool-commands/snapshot>` command, Scylla is not going to copy existing SStables to the snapshot directory as one could have expected. Instead, it is going to create hard links to them. While this may seem trivial, what should be noted is the following:
+When you create a snapshot using the :doc:`nodetool snapshot </operating-scylla/nodetool-commands/snapshot>` command, ScyllaDB does not copy existing SSTables to the snapshot directory, as one might expect. Instead, it creates hard links to them (see the sketch below). While this may seem trivial, what should be noted is the following:

* The snapshot disk space at first will start at zero and will grow to become equal to the size of the node data set when the snapshot was created. So at the beginning there will not be any significant increase in the disk space utilization.
* While it may seem plausible to believe that the snapshot image is immediately created, the snapshot eventually grows to its expected size
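
A minimal sketch of observing the hard-link behavior; the tag, keyspace name, and data path are illustrative:

.. code-block:: shell

   nodetool snapshot -t mytag my_keyspace

   # Snapshot files share inode numbers with the live SSTables,
   # so the snapshot initially consumes no additional disk space:
   ls -li /var/lib/scylla/data/my_keyspace/*/snapshots/mytag/ | head
   ls -li /var/lib/scylla/data/my_keyspace/*/ | head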
diff --git a/docs/kb/dpdk-hardware.rst b/docs/kb/dpdk-hardware.rst
--- a/docs/kb/dpdk-hardware.rst
+++ b/docs/kb/dpdk-hardware.rst
@@ -2,11 +2,11 @@ DPDK mode
=========
**Topic: Planning and setup**

-**Learn: How to select networking hardware for Scylla**
+**Learn: How to select networking hardware for ScyllaDB**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

-Scylla is designed to use the Seastar framework, which uses the Data Plane Development Kit (DPDK) to drive NIC hardware directly, instead of relying on the kernel’s network stack. This provides an enormous performance boost for Scylla. Scylla and DPDK also rely on the Linux “hugepages” feature to minimize overhead on memory allocations. DPDK is supported on a variety of high-performance network devices.
+ScyllaDB is designed to use the Seastar framework, which uses the Data Plane Development Kit (DPDK) to drive NIC hardware directly, instead of relying on the kernel’s network stack. This provides an enormous performance boost for ScyllaDB. ScyllaDB and DPDK also rely on the Linux “hugepages” feature to minimize overhead on memory allocations. DPDK is supported on a variety of high-performance network devices.
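
Before the NIC-specific configuration below, hugepages are typically reserved through a standard Linux sysctl rather than a ScyllaDB-specific setting; a minimal sketch, with a page count chosen only for illustration:

.. code-block:: shell

   # Reserve 2 MiB hugepages for DPDK (size the count to your RAM and workload):
   sudo sysctl -w vm.nr_hugepages=1024

   # Verify the reservation:
   grep HugePages_Total /proc/meminfo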

.. role:: raw-html(raw)
:format: html
@@ -19,7 +19,7 @@ Scylla is designed to use the Seastar framework, which uses the Data Plane Devel
|Intel |i40e (X710, XL710) | :raw-html:`<span class="icon-yes"/>` |
+------+---------------------------------------------------+--------------------------------------+

-Scylla RPM packages are built with DPDK support, but the package defaults to POSIX networking mode (see Administration Guide). To enable DPDK, edit ``/etc/sysconfig/scylla-server`` and edit the following lines:
+ScyllaDB RPM packages are built with DPDK support, but the package defaults to POSIX networking mode (see the Administration Guide). To enable DPDK, edit ``/etc/sysconfig/scylla-server`` and change the following lines:

.. code-block:: ini

diff --git a/docs/kb/flamegraph.rst b/docs/kb/flamegraph.rst
--- a/docs/kb/flamegraph.rst
+++ b/docs/kb/flamegraph.rst
@@ -5,8 +5,8 @@ Debug your database with Flame Graphs
Flame Graphs are used as a debugging tool to identify latency and the part of the execution path that takes most of the CPU time.
Use Flame Graphs when you:

-* Need to understand which Scylla code path/functions are using the most time. For instance, when you have latency issues.
-* Need to compare time spent in particular Scylla code paths/functions on different shards. For instance, when you have latency issues on one CPU but not on the other.
+* Need to understand which ScyllaDB code path/functions are using the most time. For instance, when you have latency issues.
+* Need to compare time spent in particular ScyllaDB code paths/functions on different shards. For instance, when you have latency issues on one CPU but not on the other.

Run a Flame Graph
-----------------
@@ -28,7 +28,7 @@ Run a Flame Graph
git clone https://github.com/brendangregg/FlameGraph
cd FlameGraph

-#. Run the following perf commands, using :doc:`Map CPU to Scylla Shards </kb/map-cpu>` and :doc:`Using the perf utility with Scylla </kb/use-perf>` for reference.
+#. Run the following perf commands, using :doc:`Map CPU to ScyllaDB Shards </kb/map-cpu>` and :doc:`Using the perf utility with ScyllaDB </kb/use-perf>` for reference.

.. code-block:: shell

@@ -43,7 +43,7 @@ Run a Flame Graph
Tips
----

-* On the CPU you are recording, try to load Scylla to consume 100% of the CPU runtime. Otherwise you’ll see a lot of OS functions related to the idle time handling
+* On the CPU you are recording, try to load ScyllaDB to consume 100% of the CPU runtime. Otherwise, you’ll see a lot of OS functions related to idle time handling.

* Recording on all shards (e.g. using “perf record” -p parameter) may lead to confusing results recording the same symbol called from different threads (shards). This is not recommended.

diff --git a/docs/kb/gc-grace-seconds.rst b/docs/kb/gc-grace-seconds.rst
--- a/docs/kb/gc-grace-seconds.rst
+++ b/docs/kb/gc-grace-seconds.rst
@@ -8,7 +8,7 @@ How to Change gc_grace_seconds for a Table

How to change (reduce) gc_grace_seconds parameter of the table

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**


Issue
diff --git a/docs/kb/gossip.rst b/docs/kb/gossip.rst
--- a/docs/kb/gossip.rst
+++ b/docs/kb/gossip.rst
@@ -1,10 +1,10 @@
-Gossip in Scylla
-================
+Gossip in ScyllaDB
+==================
**Topic: Internals**

**Audience: Devops professionals, architects**

-Scylla, like Apache Cassandra, uses a type of protocol called “gossip” to exchange metadata about the identities of nodes in a cluster and whether nodes are up or down. Of course, since there is no single point of failure there can be no single registry of node state, so nodes must share information among themselves.
+ScyllaDB, like Apache Cassandra, uses a type of protocol called “gossip” to exchange metadata about the identities of nodes in a cluster and whether nodes are up or down. Of course, since there is no single point of failure there can be no single registry of node state, so nodes must share information among themselves.

Gossip protocols are only required in distributed systems so are probably new to most administrators. `According to Wikipedia <https://en.wikipedia.org/wiki/Gossip_protocol>`_, the ideal gossip protocol has several qualities:

@@ -16,7 +16,7 @@ Gossip protocols are only required in distributed systems so are probably new to
* There is some form of randomness in the peer selection.
* Due to the replication there is an implicit redundancy of the delivered information.

-Individual gossip interactions in Scylla, like Apache Cassandra, are relatively infrequent and simple.
+Individual gossip interactions in ScyllaDB, like Apache Cassandra, are relatively infrequent and simple.
Each node, once per second, randomly selects 1 to 3 nodes to interact with.

Each node runs the gossip protocol once per second, but the gossip runs are not synchronized across the cluster.
@@ -39,13 +39,13 @@ A heart_beat_state contains integers for generation and “version number”. Th

A round of gossip is designed to minimize the amount of data sent, while resolving any conflicts between the node state data on the two gossiping nodes. In the gossip_digest_syn message, Node A sends a gossip digest: a list of all its known nodes, generations, and versions. Node B compares generation and version to its known nodes, and, in the gossip_digest_ack message, sends any of its own data that differ, along with its own digest. Finally, Node A replies with any state differences between its known state and Node B’s digest.

-Scylla gossip implementation
-----------------------------
-Scylla gossip messages run over the Scylla messaging_service, along with all other inter-node traffic including sending mutations, and streaming of data. Scylla’s messaging_service runs on the Seastar RPC service. Seastar is the scalable software framework for multicore systems that Scylla uses. If no TCP connection is up between a pair of nodes, messaging_service will create a new one. If it is up already, messaging service will use the existing one.
+ScyllaDB gossip implementation
+------------------------------
+ScyllaDB gossip messages run over the ScyllaDB messaging_service, along with all other inter-node traffic, including sending mutations and streaming of data. ScyllaDB’s messaging_service runs on the Seastar RPC service. Seastar is the scalable software framework for multicore systems that ScyllaDB uses. If no TCP connection is up between a pair of nodes, messaging_service will create a new one. If one is already up, messaging_service will use the existing one.

Gossip on multicore
-------------------
-Each Scylla node consists of several independent shards, one per core, which operate on a shared-nothing basis and communicate without locking. Internally, the gossip component, which runs on CPU 0 only, needs to have connections forwarded from other shards. The node state data, shared by gossip, is replicated to the other shards.
+Each ScyllaDB node consists of several independent shards, one per core, which operate on a shared-nothing basis and communicate without locking. Internally, the gossip component, which runs on CPU 0 only, needs to have connections forwarded from other shards. The node state data, shared by gossip, is replicated to the other shards.

The gossip protocol provides important advantages especially for large clusters. Compared to “flooding” information across nodes, it can synchronize data faster, and allow for fast recovery when a new node is down or a node is returned to service. Nodes only mark other nodes as down if an actual failure is detected, but gossip quickly shares the good news of a node coming back up.
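
To see the node state that gossip has propagated to a particular node, you can query it from the shell; a minimal sketch:

.. code-block:: shell

   # Prints generation, heartbeat, and application states for every known endpoint:
   nodetool gossipinfo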

diff --git a/docs/kb/increase-permission-cache.rst b/docs/kb/increase-permission-cache.rst
--- a/docs/kb/increase-permission-cache.rst
+++ b/docs/kb/increase-permission-cache.rst
@@ -4,7 +4,7 @@ Increase Permission Cache to Avoid Non-paged Queries

**Topic: Mitigate non-paged queries coming from connection authentications**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**



diff --git a/docs/kb/index.rst b/docs/kb/index.rst
--- a/docs/kb/index.rst
+++ b/docs/kb/index.rst
@@ -13,62 +13,62 @@ Knowledge Base
:id: "getting-started"
:class: my-panel

- * :doc:`Scylla Seed Nodes </kb/seed-nodes>` - Introduction on the purpose and role of Seed Nodes in Scylla as well as configuration tips.
- * :doc:`Compaction </kb/compaction>` - To free up disk space and speed up reads, Scylla must do compaction operations.
+ * :doc:`ScyllaDB Seed Nodes </kb/seed-nodes>` - Introduction on the purpose and role of Seed Nodes in ScyllaDB as well as configuration tips.
+ * :doc:`Compaction </kb/compaction>` - To free up disk space and speed up reads, ScyllaDB must do compaction operations.
* :doc:`DPDK mode </kb/dpdk-hardware>` - Learn to select and configure networking for DPDK mode
- * :doc:`POSIX networking for Scylla </kb/posix>` - Scylla's POSIX mode works on all physical and virtual network devices and is useful for development work.
+ * :doc:`POSIX networking for ScyllaDB </kb/posix>` - ScyllaDB's POSIX mode works on all physical and virtual network devices and is useful for development work.
* :doc:`System Limits </kb/system-limits>` - Outlines the system limits which should be set or removed
- * :doc:`Run Scylla as a custom user:group </kb/custom-user>` - Configure the Scylla and supporting services to run as a custom user:group.
+ * :doc:`Run ScyllaDB as a custom user:group </kb/custom-user>` - Configure ScyllaDB and supporting services to run as a custom user:group.
* :doc:`How to Set up a Swap Space Using a File </kb/set-up-swap>` - Outlines the steps you need to take to set up a swap space.


.. panel-box::
- :title: Scylla under the hood
+ :title: ScyllaDB under the hood
:id: "getting-started"
:class: my-panel

- * :doc:`Gossip in Scylla </kb/gossip>` - Scylla, like Cassandra, uses a type of protocol called “gossip” to exchange metadata about the identities of nodes in a cluster. Here's how it works behind the scenes.
- * :doc:`Scylla consistency quiz for administrators </kb/quiz-administrators>` - How much do you know about NoSQL, from the administrator point of view?
- * :doc:`Scylla Memory Usage </kb/memory-usage>` - Short explanation how Scylla manages memory
- * :doc:`Scylla Nodes are Unresponsive </kb/unresponsive-nodes>` - How to handle swap in Scylla
+ * :doc:`Gossip in ScyllaDB </kb/gossip>` - ScyllaDB, like Cassandra, uses a type of protocol called “gossip” to exchange metadata about the identities of nodes in a cluster. Here's how it works behind the scenes.
+ * :doc:`ScyllaDB consistency quiz for administrators </kb/quiz-administrators>` - How much do you know about NoSQL, from the administrator point of view?
+ * :doc:`ScyllaDB Memory Usage </kb/memory-usage>` - A short explanation of how ScyllaDB manages memory
+ * :doc:`ScyllaDB Nodes are Unresponsive </kb/unresponsive-nodes>` - How to handle swap in ScyllaDB
* :doc:`CQL Query Does Not Display Entire Result Set </kb/cqlsh-more>` - What to do when a CQL query doesn't display the entire result set.
* :doc:`Snapshots and Disk Utilization </kb/disk-utilization>` - How snapshots affect disk utilization
- * :doc:`Scylla Snapshots </kb/snapshots>` - What Scylla snapshots are, what they are used for, and how they get created and removed.
- * :doc:`How does Scylla LWT Differ from Apache Cassandra ? </kb/lwt-differences>` - How does Scylla's implementation of lightweight transactions differ from Apache Cassandra?
+ * :doc:`ScyllaDB Snapshots </kb/snapshots>` - What ScyllaDB snapshots are, what they are used for, and how they get created and removed.
+ * :doc:`How does ScyllaDB LWT Differ from Apache Cassandra? </kb/lwt-differences>` - How does ScyllaDB's implementation of lightweight transactions differ from Apache Cassandra?
* :doc:`If a query does not reveal enough results </kb/cqlsh-results>`
* :doc:`How to Change gc_grace_seconds for a Table </kb/gc-grace-seconds>` - How to change the ``gc_grace_seconds`` parameter and prevent data resurrection.
* :doc:`How to flush old tombstones from a table </kb/tombstones-flush>` - How to remove old tombstones from SSTables.
* :doc:`Increase Cache to Avoid Non-paged Queries </kb/increase-permission-cache>` - How to increase the ``permissions_cache_max_entries`` setting.
* :doc:`How to Safely Increase the Replication Factor </kb/rf-increase>`
* :doc:`Facts about TTL, Compaction, and gc_grace_seconds <ttl-facts>`

- **Note**: The KB article for social readers has been *removed*. Instead, please look at lessons on `Scylla University <https://university.scylladb.com/>`_ or the `Care Pet example <https://care-pet.docs.scylladb.com/master/>`_
+ **Note**: The KB article for social readers has been *removed*. Instead, please look at lessons on `ScyllaDB University <https://university.scylladb.com/>`_ or the `Care Pet example <https://care-pet.docs.scylladb.com/master/>`_


.. panel-box::
- :title: Configuring and Integrating Scylla
+ :title: Configuring and Integrating ScyllaDB
:id: "getting-started"
:class: my-panel

- * :doc:`NTP configuration for Scylla </kb/ntp>` - Scylla depends on an accurate system clock. Learn to configure NTP for your data store and applications.
- * :doc:`Scylla and Spark integration </kb/scylla-and-spark-integration>` - How to run an example Spark application that uses Scylla to store data?
- * :doc:`Map CPUs to Scylla Shards </kb/map-cpu>` - Mapping between CPUs and Scylla shards
+ * :doc:`NTP configuration for ScyllaDB </kb/ntp>` - ScyllaDB depends on an accurate system clock. Learn to configure NTP for your data store and applications.
+ * :doc:`ScyllaDB and Spark integration </kb/scylla-and-spark-integration>` - How to run an example Spark application that uses ScyllaDB to store data?
+ * :doc:`Map CPUs to ScyllaDB Shards </kb/map-cpu>` - Mapping between CPUs and ScyllaDB shards
* :doc:`Customizing CPUSET </kb/customizing-cpuset>`
* :doc:`Recreate RAID devices </kb/raid-device>` - How to recreate your RAID devices without running scylla-setup
- * :doc:`Configure Scylla Networking with Multiple NIC/IP Combinations </kb/yaml-address>` - examples for setting the different IP addresses in scylla.yaml
+ * :doc:`Configure ScyllaDB Networking with Multiple NIC/IP Combinations </kb/yaml-address>` - examples for setting the different IP addresses in scylla.yaml
* :doc:`Updating the Mode in perftune.yaml After a ScyllaDB Upgrade </kb/perftune-modes-sync>`
* :doc:`Kafka Sink Connector Quickstart </using-scylla/integrations/kafka-connector>`
* :doc:`Kafka Sink Connector Configuration </using-scylla/integrations/sink-config>`


.. panel-box::
- :title: Analyzing Scylla
+ :title: Analyzing ScyllaDB
:id: "getting-started"
:class: my-panel

- * :doc:`Using the perf utility with Scylla </kb/use-perf>` - Using the perf utility to analyze Scylla
+ * :doc:`Using the perf utility with ScyllaDB </kb/use-perf>` - Using the perf utility to analyze ScyllaDB
* :doc:`Debug your database with Flame Graphs </kb/flamegraph>` - How to setup and run a Flame Graph
- * :doc:`Decoding Stack Traces </kb/decode-stack-trace>` - How to decode stack traces in Scylla Logs
+ * :doc:`Decoding Stack Traces </kb/decode-stack-trace>` - How to decode stack traces in ScyllaDB Logs
* :doc:`Counting all rows in a table </kb/count-all-rows>` - Why counting all rows in a table often leads to a timeout


diff --git a/docs/kb/lwt-differences.rst b/docs/kb/lwt-differences.rst
--- a/docs/kb/lwt-differences.rst
+++ b/docs/kb/lwt-differences.rst
@@ -1,16 +1,16 @@
-==================================================
-How does Scylla LWT Differ from Apache Cassandra ?
-==================================================
+====================================================
+How does ScyllaDB LWT Differ from Apache Cassandra?
+====================================================

-Scylla is making an effort to be compatible with Cassandra, down to the level of limitations of the implementation.
+ScyllaDB makes an effort to be compatible with Cassandra, down to the level of the implementation's limitations.
How is it different?

-* Scylla most commonly uses fewer rounds than Cassandra to complete a lightweight transaction. While Cassandra issues a separate read query to fetch the old record, scylla piggybacks the read result on the response to the prepare round.
-* Scylla will automatically use synchronous commit log write mode for all lightweight transaction writes. Before a lightweight transaction completes, scylla will ensure that the data in it has hit the device. This is done in all commitlog_sync modes.
-* Conditional statements return a result set, and unlike Cassandra, Scylla result set metadata doesn’t change from execution to execution: Scylla always returns the old version of the row, regardless of whether the condition is true or not. This ensures conditional statements work well with prepared statements.
+* ScyllaDB most commonly uses fewer rounds than Cassandra to complete a lightweight transaction. While Cassandra issues a separate read query to fetch the old record, ScyllaDB piggybacks the read result on the response to the prepare round.
+* ScyllaDB will automatically use synchronous commit log write mode for all lightweight transaction writes. Before a lightweight transaction completes, ScyllaDB will ensure that the data in it has hit the device. This is done in all commitlog_sync modes.
+* Conditional statements return a result set, and unlike Cassandra, ScyllaDB result set metadata doesn’t change from execution to execution: ScyllaDB always returns the old version of the row, regardless of whether the condition is true or not. This ensures conditional statements work well with prepared statements.
* For batch statement, the returned result set contains an old row for every conditional statement in the batch, in statement order. Cassandra returns results in clustering key order.
-* Unlike Cassandra, Scylla uses per-core data partitioning, so the RPC that is done to perform a transaction talks directly to the right core on a peer replica, avoiding the concurrency overhead. This is, of course, true, if Scylla’s own shard-aware driver is used - otherwise we add an extra hop to the right core at the coordinator node.
-* Scylla does not store hints for lightweight transaction writes, since this is redundant as all such writes are already present in system.paxos table.
+* Unlike Cassandra, ScyllaDB uses per-core data partitioning, so the RPC that is done to perform a transaction talks directly to the right core on a peer replica, avoiding the concurrency overhead. This is, of course, true, if ScyllaDB’s own shard-aware driver is used - otherwise we add an extra hop to the right core at the coordinator node.
+* ScyllaDB does not store hints for lightweight transaction writes, since this is redundant as all such writes are already present in system.paxos table.


More on :doc:`Lightweight Transactions (LWT) </using-scylla/lwt>`
diff --git a/docs/kb/map-cpu.rst b/docs/kb/map-cpu.rst
--- a/docs/kb/map-cpu.rst
+++ b/docs/kb/map-cpu.rst
@@ -1,12 +1,12 @@
-==========================
-Map CPUs to Scylla Shards
-==========================
+===========================
+Map CPUs to ScyllaDB Shards
+===========================

-Due to its thread-per-core architecture, many things within Scylla can be better understood when you look at it on a per-CPU basis. There are Linux tools such as ``top`` and ``perf`` that can give information about what is happening within a CPU, given a CPU number.
+Due to its thread-per-core architecture, many things within ScyllaDB can be better understood when you look at it on a per-CPU basis. There are Linux tools such as ``top`` and ``perf`` that can give information about what is happening within a CPU, given a CPU number.

-A common mistake users make is to assume that there is a direct and predictable relationship between the Scylla Shard ID and the CPU ID, which is not true.
+A common mistake users make is to assume that there is a direct and predictable relationship between the ScyllaDB Shard ID and the CPU ID, which is not true.

-Starting in version 3.0, Scylla ships with a script to let users know about the mapping between CPUs and Scylla Shards. For users of older versions, a copy of the script can be downloaded from the `Seastar git tree <https://raw.githubusercontent.com/scylladb/seastar/master/scripts/seastar-cpu-map.sh>`_.
+Starting in version 3.0, ScyllaDB ships with a script to let users know about the mapping between CPUs and ScyllaDB Shards. For users of older versions, a copy of the script can be downloaded from the `Seastar git tree <https://raw.githubusercontent.com/scylladb/seastar/master/scripts/seastar-cpu-map.sh>`_.

Examples of usage
------------------
diff --git a/docs/kb/memory-usage.rst b/docs/kb/memory-usage.rst
--- a/docs/kb/memory-usage.rst
+++ b/docs/kb/memory-usage.rst
@@ -1,14 +1,14 @@
-Scylla Memory Usage
-===================
+ScyllaDB Memory Usage
+=====================

-Scylla memory usage might be larger than the data set used.
+ScyllaDB memory usage might be larger than the data set used.

For example:

-``The data size is 19GB, but Scylla uses 220G memory.``
+``The data size is 19 GB, but ScyllaDB uses 220 GB of memory.``


-Scylla uses available memory to cache your data. Scylla knows how to dynamically manage memory for optimal performance, for example, if many clients connect to Scylla, it will evict some data from the cache to make room for these connections, when the connection count drops again, this memory is returned to the cache.
+ScyllaDB uses available memory to cache your data. ScyllaDB dynamically manages memory for optimal performance. For example, if many clients connect to ScyllaDB, it evicts some data from the cache to make room for these connections; when the connection count drops again, this memory is returned to the cache.

To limit memory usage, you can start ScyllaDB with the ``--memory`` parameter.
Alternatively, you can specify the amount of memory ScyllaDB should leave to the OS with the ``--reserve-memory`` parameter. Keep in mind that the memory left to the operating system needs to be sufficient for external ScyllaDB modules, such as ``scylla-jmx``, which runs on top of the JVM.
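
A minimal sketch of both options when launching the server directly (the values are illustrative):

.. code-block:: shell

   # Cap ScyllaDB itself at 4 GB of RAM:
   scylla --memory 4G

   # Or, alternatively, leave 2 GB of RAM to the OS and external modules:
   scylla --reserve-memory 2G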
diff --git a/docs/kb/ntp.rst b/docs/kb/ntp.rst
--- a/docs/kb/ntp.rst
+++ b/docs/kb/ntp.rst
@@ -1,12 +1,12 @@
-NTP Configuration for Scylla
-============================
+NTP Configuration for ScyllaDB
+==============================
**Topic: System administration**

-**Learn: How to configure time synchronization for Scylla**
+**Learn: How to configure time synchronization for ScyllaDB**

-**Audience: Scylla and Apache Cassandra administrators**
+**Audience: ScyllaDB and Apache Cassandra administrators**

-Apache Cassandra and Scylla depend on an accurate system clock. Kyle Kingsbury,
+Apache Cassandra and ScyllaDB depend on an accurate system clock. Kyle Kingsbury,
author of the ``jepsen`` distributed systems testing tool,
`writes <https://aphyr.com/posts/299-the-trouble-with-timestamps>`_,

diff --git a/docs/kb/posix.rst b/docs/kb/posix.rst
--- a/docs/kb/posix.rst
+++ b/docs/kb/posix.rst
@@ -1,12 +1,12 @@
-POSIX networking for Scylla
-===========================
+POSIX networking for ScyllaDB
+=============================
**Topic: Planning and setup**

-**Learn: How to configure POSIX networking for Scylla**
+**Learn: How to configure POSIX networking for ScyllaDB**

**Audience: Developers, devops, integration testers**

-The Seastar framework used in Scylla can support two networking modes.
+The Seastar framework used in ScyllaDB can support two networking modes.
For high-performance production workloads, use the Data Plane
Development Kit (DPDK) for maximum performance on specific modern
network hardware.
@@ -29,8 +29,8 @@ Firewall Configuration
For a single node, the firewall will need to be set up to allow TCP on
the following :ref:`ports <cqlsh-networking>`.

-Scylla Configuration
---------------------
+ScyllaDB Configuration
+----------------------

POSIX mode is the default, in ``/etc/sysconfig/scylla-server``. Check
that ``NETWORK_MODE`` is set to ``posix``.
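
A minimal check from the shell, using the file and variable named above:

.. code-block:: shell

   grep NETWORK_MODE /etc/sysconfig/scylla-server

   # Expected output when POSIX networking is enabled:
   # NETWORK_MODE=posix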
diff --git a/docs/kb/quiz-administrators.rst b/docs/kb/quiz-administrators.rst
--- a/docs/kb/quiz-administrators.rst
+++ b/docs/kb/quiz-administrators.rst
@@ -1,10 +1,10 @@
-Scylla consistency quiz for administrators
-==========================================
+ScyllaDB consistency quiz for administrators
+============================================
**Topic: Architecture and development**

-**Learn: Understanding consistency in Scylla: a quiz**
+**Learn: Understanding consistency in ScyllaDB: a quiz**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

Q: When you run ``nodetool decommission`` to remove a node…

diff --git a/docs/kb/raid-device.rst b/docs/kb/raid-device.rst
--- a/docs/kb/raid-device.rst
+++ b/docs/kb/raid-device.rst
@@ -2,7 +2,7 @@
Recreate RAID devices
=====================

-Scylla creates a RAID device on all storage devices assigned to it as part of Scylla setup. However, there are situations in which we want to redo just this step, without invoking the entire setup phase again. One example of such a situation is when Scylla is used in Clouds with ephemeral storage. After a hard stop, the storage devices will be reset and the previous setup will be destroyed.
+ScyllaDB creates a RAID device on all storage devices assigned to it as part of ScyllaDB setup. However, there are situations in which you may want to redo just this step, without invoking the entire setup phase again. One example is when ScyllaDB is used in a cloud environment with ephemeral storage: after a hard stop, the storage devices are reset and the previous setup is destroyed.
To recreate your RAID devices, run this script:

.. code-block:: shell
diff --git a/docs/kb/rf-increase.rst b/docs/kb/rf-increase.rst
--- a/docs/kb/rf-increase.rst
+++ b/docs/kb/rf-increase.rst
@@ -6,7 +6,7 @@ How to Safely Increase the Replication Factor
**Topic: What can happen when you increase RF**


-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**


Issue
diff --git a/docs/kb/scylla-and-spark-integration.rst b/docs/kb/scylla-and-spark-integration.rst
--- a/docs/kb/scylla-and-spark-integration.rst
+++ b/docs/kb/scylla-and-spark-integration.rst
@@ -1,29 +1,29 @@
-Scylla and Spark integration
-============================
+ScyllaDB and Spark integration
+==============================


-Simple Scylla-Spark integration example
----------------------------------------
+Simple ScyllaDB-Spark integration example
+-----------------------------------------

This is an example of how to create a very simple Spark application that
-uses Scylla to store its data. The application is going to read people's
+uses ScyllaDB to store its data. The application is going to read people's
names and ages from one table and write the names of the adults to
another one. It also will show the number of adults and all people in
the database.

Prerequisites
~~~~~~~~~~~~~

-- Scylla
+- ScyllaDB
- sbt

-Prepare Scylla
-~~~~~~~~~~~~~~
+Prepare ScyllaDB
+~~~~~~~~~~~~~~~~

Firstly, we need to create keyspace and tables in which data processed
by the example application will be stored.

-Launch Scylla and connect to it using cqlsh. The following commands will
+Launch ScyllaDB and connect to it using cqlsh. The following commands will
create a new keyspace for our tests and make it the current one.

::
@@ -105,7 +105,7 @@ the actual logic of the application. Create file
}

Since we don't want to hardcode in our application any information about
-Scylla or Spark we will also need an additional configuration file
+ScyllaDB or Spark we will also need an additional configuration file
``spark-scylla.conf``.

::
@@ -130,7 +130,7 @@ Download and run Spark
The next step is to get Spark running. Pre-built binaries can be
downloaded from `this <http://spark.apache.org/downloads.html>`__
website. Make sure to choose release 1.5.0. Since we are going to use it
-with Scylla Hadoop version doesn't matter.
+with ScyllaDB, the Hadoop version doesn't matter.

Once the download has finished, unpack the archive and in its root
directory, execute the following command to start Spark Master:
@@ -153,9 +153,9 @@ command:
Run application
~~~~~~~~~~~~~~~

-The application is built, Spark is up, and Scylla has all the necessary
+The application is built, Spark is up, and ScyllaDB has all the necessary
tables created and contains the input data for our example. This means
-that we are ready to run the application. Make sure that Scylla is
+that we are ready to run the application. Make sure that ScyllaDB is
running and execute (still in the Spark directory) the following
command:

@@ -172,7 +172,7 @@ them, there should be a message from the application:
Adults: 5
Total: 7

-You can also connect to Scylla with cqlsh, and using the following query,
+You can also connect to ScyllaDB with cqlsh, and using the following query,
see the results of our example in the database.

::
@@ -200,12 +200,12 @@ RoadTrip example

This is a short guide explaining how to run a Spark example application
available `here <https://github.com/jsebrien/spark-tests>`__ with
-Scylla.
+ScyllaDB.

Prerequisites
~~~~~~~~~~~~~

-- Scylla
+- ScyllaDB
- Maven
- Git

@@ -247,7 +247,7 @@ Update connector

spark-tests use Spark Cassandra Connector in version 1.1.0 which is too
old for our purposes. Before 1.3.0 the connector used to use Thrift as
-well CQL and that won't work with Scylla. Updating the example isn't
+well as CQL, and that won't work with ScyllaDB. Updating the example isn't
very complicated and can be accomplished by applying the following
patch:

@@ -321,11 +321,11 @@ The example can be built with Maven:

mvn compile

-Start Scylla
-~~~~~~~~~~~~
+Start ScyllaDB
+~~~~~~~~~~~~~~

-The application we are trying to run will try to connect with Scylla
-using custom port 9142. That's why when starting Scylla, an additional
+The application we are trying to run will try to connect to ScyllaDB
+using the custom port 9142. That's why, when starting ScyllaDB, an additional
flag is needed to make sure that's the port it listens on
(alternatively, you can change all occurrences of 9142 to 9042 in the
example source code).
@@ -337,21 +337,21 @@ example source code).
Run the application
~~~~~~~~~~~~~~~~~~~

-With the example compiled and Scylla running all that is left to be done
+With the example compiled and ScyllaDB running, all that is left to be done
is to actually run the application:

::

mvn exec:java

-Scylla limitations
-------------------
+ScyllaDB limitations
+--------------------

-- Scylla needs Spark Cassandra Connector 1.3.0 or later.
-- Scylla doesn't populate ``system.size_estimates``, and therefore the
+- ScyllaDB needs Spark Cassandra Connector 1.3.0 or later.
+- ScyllaDB doesn't populate ``system.size_estimates``, and therefore the
connector won't be able to perform automatic split sizing optimally.

-For more compatibility information check `Scylla status <http://www.scylladb.com/technology/status/>`_
+For more compatibility information, check `ScyllaDB status <http://www.scylladb.com/technology/status/>`_

:doc:`Knowledge Base </kb/index>`

diff --git a/docs/kb/scylla-limits-systemd.rst b/docs/kb/scylla-limits-systemd.rst
--- a/docs/kb/scylla-limits-systemd.rst
+++ b/docs/kb/scylla-limits-systemd.rst
@@ -1,28 +1,28 @@
-============================================
-Increase Scylla resource limits over systemd
-============================================
+==============================================
+Increase ScyllaDB resource limits over systemd
+==============================================

-**Topic: Increasing resource limits when Scylla runs and is managed via systemd**
+**Topic: Increasing resource limits when ScyllaDB runs and is managed via systemd**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**



Issue
-----

-Updates to ``/etc/security/limits.d/scylla.conf`` do not have any effect. After a cluster rolling restart is completed, the Scylla limits listed under ``/proc/<PID>/limits`` are still the same or lower than what has been configured.
+Updates to ``/etc/security/limits.d/scylla.conf`` do not have any effect. After a cluster rolling restart is completed, the ScyllaDB limits listed under ``/proc/<PID>/limits`` are still the same or lower than what has been configured.
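
One quick way to inspect the limits that the running ScyllaDB process actually got (a sketch; it assumes the process name is ``scylla`` and uses standard Linux tooling, not a ScyllaDB-specific command):

.. code-block:: shell

   # Show the effective file-descriptor and process limits of the oldest scylla process
   grep -E 'Max (open files|processes)' "/proc/$(pgrep -xo scylla)/limits"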

Root Cause
----------

-When running under systemd, Scylla enforces the **LimitNOFILE** and **LimitNPROC** values under ``/lib/systemd/system/scylla-server.service``, where:
+When running under systemd, ScyllaDB enforces the **LimitNOFILE** and **LimitNPROC** values under ``/lib/systemd/system/scylla-server.service``, where:

**LimitNOFILE** - Maximum number of file descriptors allowed to be opened simultaneously (defaults to 800000)

**LimitNPROC** - Maximum number of processes allowed to run in parallel (defaults to 8096)

-Even though Scylla's provided defaults are suitable for most workloads, there may be situations on which these values may need to be overridden.
+Even though ScyllaDB's provided defaults are suitable for most workloads, there may be situations in which these values need to be overridden.

Before you start
----------------
@@ -31,7 +31,7 @@ The Linux kernel imposes an upper limit on the maximum number of file-handles th

The ``fs.nr_open`` parameter default value is 1048576 (1024*1024) and it must be increased whenever it is required to overcome such limit.

-As a rule of thumb, always ensure that the value of ``fs.nr_open`` is **equal or greater than** the maximum number of file-handles that Scylla may be able to consume.
+As a rule of thumb, always ensure that the value of ``fs.nr_open`` is **equal to or greater than** the maximum number of file-handles that ScyllaDB may be able to consume.

1. To check the value of ``fs.nr_open`` run:

@@ -58,7 +58,7 @@ As a rule of thumb, always ensure that the value of ``fs.nr_open`` is **equal or
Solution
--------

-1. To override Scylla limits on systemd, run:
+1. To override ScyllaDB limits on systemd, run:

.. code-block:: shell

@@ -73,15 +73,15 @@ Solution
[Service]
LimitNOFILE=5000000

-3. Restart Scylla:
+3. Restart ScyllaDB:

.. code-block:: shell

sudo systemctl restart scylla-server.service

-This will create a configuration file named ``override.conf`` under the ``/etc/systemd/system/scylla-server.service.d`` folder. Whenever editing this file by hand manually, remember to run ``sudo systemctl daemon-reload`` before restarting Scylla, so that systemd reloads the changes.
+This will create a configuration file named ``override.conf`` under the ``/etc/systemd/system/scylla-server.service.d`` folder. Whenever you edit this file by hand, remember to run ``sudo systemctl daemon-reload`` before restarting ScyllaDB, so that systemd reloads the changes.
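
If you prefer to create the override file by hand, the following sketch shows the equivalent steps (the limit value is only an example; pick one that suits your workload):

.. code-block:: shell

   # Write the drop-in override directly, then make systemd pick it up
   sudo mkdir -p /etc/systemd/system/scylla-server.service.d
   printf '[Service]\nLimitNOFILE=5000000\n' | sudo tee /etc/systemd/system/scylla-server.service.d/override.conf
   sudo systemctl daemon-reload
   sudo systemctl restart scylla-server.service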

-4. To check the updated limits allowed by the Scylla process run:
+4. To check the updated limits allowed by the ScyllaDB process run:

.. code-block:: shell

diff --git a/docs/kb/seed-nodes.rst b/docs/kb/seed-nodes.rst
--- a/docs/kb/seed-nodes.rst
+++ b/docs/kb/seed-nodes.rst
@@ -1,21 +1,21 @@
-=================
-Scylla Seed Nodes
-=================
+===================
+ScyllaDB Seed Nodes
+===================

-**Topic: Scylla Seed Nodes Overview**
+**Topic: ScyllaDB Seed Nodes Overview**

-**Learn: What a seed node is, and how they should be used in a Scylla Cluster**
+**Learn: What a seed node is, and how they should be used in a ScyllaDB Cluster**

-**Audience: Scylla Administrators**
+**Audience: ScyllaDB Administrators**


-What is the Function of a Seed Node in Scylla?
-----------------------------------------------
+What is the Function of a Seed Node in ScyllaDB?
+------------------------------------------------

.. note::
- Seed nodes function was changed in Scylla Open Source 4.3 and Scylla Enterprise 2021.1; if you are running an older version, see :ref:`Older Version Of Scylla <seeds-older-versions>`.
+ Seed nodes function was changed in ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1; if you are running an older version, see :ref:`Older Version Of ScyllaDB <seeds-older-versions>`.

-A Scylla seed node is a node specified with the ``seeds`` configuration parameter in ``scylla.yaml``. It is used by new node joining as the first contact point.
+A ScyllaDB seed node is a node specified with the ``seeds`` configuration parameter in ``scylla.yaml``. It is used by a new node joining the cluster as its first contact point.
It allows nodes to discover the cluster ring topology on startup (when joining the cluster). This means that any time a node is joining the cluster, it needs to learn the cluster ring topology, meaning:

- What the IPs of the nodes in the cluster are
@@ -28,15 +28,15 @@ The first node in a new cluster needs to be a seed node.

.. _seeds-older-versions:

-Older Version Of Scylla
-----------------------------
+Older Version Of ScyllaDB
+-------------------------

-In Scylla releases older than Scylla Open Source 4.3 and Scylla Enterprise 2021.1, seed node has one more function: it assists with :doc:`gossip </kb/gossip>` convergence.
+In ScyllaDB releases older than ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1, a seed node has one more function: it assists with :doc:`gossip </kb/gossip>` convergence.
Gossiping with other nodes ensures that any update to the cluster is propagated across the cluster. This includes detecting and alerting whenever a node goes down, comes back, or is removed from the cluster.

This functions was removed, as described in `Seedless NoSQL: Getting Rid of Seed Nodes in ScyllaDB <https://www.scylladb.com/2020/09/22/seedless-nosql-getting-rid-of-seed-nodes-in-scylla/>`_.

-If you run an older Scylla release, we recommend upgrading to version 4.3 (Scylla Open Source) or 2021.1 (Scylla Enterprise) or later. If you choose to run an older version, it is good practice to follow these guidelines:
+If you run an older ScyllaDB release, we recommend upgrading to version 4.3 (ScyllaDB Open Source) or 2021.1 (ScyllaDB Enterprise) or later. If you choose to run an older version, it is good practice to follow these guidelines:

* The first node in a new cluster needs to be a seed node.
* Ensure that all nodes in the cluster have the same seed nodes listed in each node's scylla.yaml.
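
To verify that every node points at the same seed list, you can inspect the ``seed_provider`` section of each node's ``scylla.yaml`` (a minimal sketch using plain shell; adjust the path if your configuration lives elsewhere):

.. code-block:: shell

   # Print the seed_provider stanza, which contains the seeds parameter
   grep -A 4 'seed_provider:' /etc/scylla/scylla.yaml
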
diff --git a/docs/kb/set-up-swap.rst b/docs/kb/set-up-swap.rst
--- a/docs/kb/set-up-swap.rst
+++ b/docs/kb/set-up-swap.rst
@@ -15,7 +15,7 @@ The help for the script can be accessed by ``scylla_swap_setup --help``.
scylla_swap_setup --help
usage: scylla_swap_setup [-h] [--swap-directory SWAP_DIRECTORY] [--swap-size SWAP_SIZE]

- Configure swap for Scylla.
+ Configure swap for ScyllaDB.

optional arguments:
-h, --help show this help message and exit
@@ -116,5 +116,5 @@ Remove a Swap File
Additional Information
----------------------

-* `Configure swap for Scylla <https://github.com/scylladb/scylla/blob/master/dist/common/scripts/scylla_swap_setup>`_
+* `Configure swap for ScyllaDB <https://github.com/scylladb/scylla/blob/master/dist/common/scripts/scylla_swap_setup>`_
* :doc:`Setup Scripts </getting-started/system-configuration>`.
diff --git a/docs/kb/snapshots.rst b/docs/kb/snapshots.rst
--- a/docs/kb/snapshots.rst
+++ b/docs/kb/snapshots.rst
@@ -1,37 +1,37 @@
-================
-Scylla Snapshots
-================
+==================
+ScyllaDB Snapshots
+==================

.. your title should be something customers will search for.

**Topic: snapshots**

.. Give a subtopic for the title (User Management, Security, Drivers, Automation, Optimization, Schema management, Data Modeling, etc.)

-**Learn: What are Scylla snapshots? What are they used for? How do they get created and removed?**
+**Learn: What are ScyllaDB snapshots? What are they used for? How do they get created and removed?**


-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

-.. Choose (Application Developer, Scylla Administrator, Internal, All)
+.. Choose (Application Developer, ScyllaDB Administrator, Internal, All)

Synopsis
--------

-Snapshots in Scylla are an essential part of the :doc:`backup and restore mechanism </operating-scylla/procedures/backup-restore/index>`. Whereas in other databases a backup starts with creating a copy of a data file (cold backup, hot backup, shadow copy backup), in Scylla the process starts with creating a table or keyspace snapshot. The snapshots are created either automatically (this is described further in this article) or by invoking the :doc:`nodetool snapshot </operating-scylla/nodetool-commands/snapshot>` command.
+Snapshots in ScyllaDB are an essential part of the :doc:`backup and restore mechanism </operating-scylla/procedures/backup-restore/index>`. Whereas in other databases a backup starts with creating a copy of a data file (cold backup, hot backup, shadow copy backup), in ScyllaDB the process starts with creating a table or keyspace snapshot. The snapshots are created either automatically (this is described further in this article) or by invoking the :doc:`nodetool snapshot </operating-scylla/nodetool-commands/snapshot>` command.
To prevent any issues with restoring your data, the backup strategy must include saving copies of the snapshots on a secondary storage. This makes sure the snapshot is available to restore if the primary storage fails.

-.. note:: If you come from RDBMS background you should not confuse snapshots with the notion of materialized views (as they are sometimes called snapshots in that area of technology). With Scylla, snapshots are `hard links <https://en.wikipedia.org/wiki/Hard_link>`_ to data files. :doc:`Materialized views </using-scylla/materialized-views>` do exist in Scylla but they are not called snapshots.
+.. note:: If you come from an RDBMS background, you should not confuse snapshots with materialized views (which are sometimes called snapshots in that area of technology). With ScyllaDB, snapshots are `hard links <https://en.wikipedia.org/wiki/Hard_link>`_ to data files. :doc:`Materialized views </using-scylla/materialized-views>` do exist in ScyllaDB, but they are not called snapshots.


How Snapshots Work
------------------

-Scylla, like Cassandra, requires `Unix-like storage <https://en.wikipedia.org/wiki/Unix_filesystem?>`_ (such is also a file system supported by Linux). As mentioned above, snapshots are hard links to SSTables on disk. It is important to understand that SSTables are immutable and as such are not re-written in the same file. When data in database changes and data is written to disk, it is written as a new file. The new files are consolidated following compaction, which merges table’s data into one or more SSTable files (depending on the compaction strategy).
+ScyllaDB, like Cassandra, requires `Unix-like storage <https://en.wikipedia.org/wiki/Unix_filesystem?>`_ (that is, a file system supported by Linux). As mentioned above, snapshots are hard links to SSTables on disk. It is important to understand that SSTables are immutable and as such are not re-written in the same file. When data in the database changes and is written to disk, it is written as a new file. The new files are consolidated following compaction, which merges a table’s data into one or more SSTable files (depending on the compaction strategy).

If snapshots (hard links) were created for existing SSTables on disk, they are preserved even if table data is eventually stored in one or more of the new SSTables. The :doc:`compaction process </cql/compaction>` removes files in the data directory, but the snapshot hard links **will still** be pointing to the **old files**. Only after all of the pointers are removed is the actual file removed. If even one pointer exists, the file will remain. Therefore, even as the database moves on, once the snapshot hard links are created, the content of the data files can be copied off to another storage location and serve as the foundation for a table, keyspace, or entire database restore (on that node, as this backup and restore process is node specific).
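
Because snapshots are hard links, you can verify on disk that a snapshot does not duplicate data: the snapshot files and the live data files share the same inode numbers. A hedged sketch (the keyspace, table, and data directory below are only placeholders):

.. code-block:: shell

   # The first column (inode number) is identical for a data file and its snapshot entry,
   # showing that both names point at the same bytes on disk.
   ls -li /var/lib/scylla/data/my_keyspace/my_table-*/ | head
   ls -li /var/lib/scylla/data/my_keyspace/my_table-*/snapshots/*/ | head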

-Apart from *planned backup* procedure described above, and as a safeguard from *accidental* loss of data, the Scylla database includes an optional creation of an automatic snapshot every time a table is dropped or truncated. As dropping a keyspace involves dropping tables within that keyspace, these actions will invoke auto snapshots as well. This option is enabled out of the box and is controlled by the auto_snapshot flag in the ``/etc/scylla/scylla.yaml`` configuration file. Note that a keyspace cannot be truncated. It can only be dropped. A table, on the other hand, can be either truncated or dropped. The data in a table can also be deleted, which is different from being truncated.
+Apart from the *planned backup* procedure described above, and as a safeguard against *accidental* loss of data, ScyllaDB includes optional creation of an automatic snapshot every time a table is dropped or truncated. As dropping a keyspace involves dropping the tables within that keyspace, these actions invoke auto snapshots as well. This option is enabled out of the box and is controlled by the ``auto_snapshot`` flag in the ``/etc/scylla/scylla.yaml`` configuration file. Note that a keyspace cannot be truncated; it can only be dropped. A table, on the other hand, can be either truncated or dropped. The data in a table can also be deleted, which is different from being truncated.

The default setting for the ``auto_snapshot`` flag in ``/etc/scylla/scylla.yaml`` file is ``true``. It is **not** recommended to set it to ``false``, unless there is a good backup and recovery strategy in place.
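
For reference, a minimal snapshot lifecycle driven by ``nodetool`` might look like the sketch below (the keyspace name and snapshot tag are placeholders):

.. code-block:: shell

   nodetool snapshot -t before_upgrade my_keyspace   # create a snapshot tagged 'before_upgrade'
   nodetool listsnapshots                            # list existing snapshots and their sizes
   nodetool clearsnapshot -t before_upgrade          # remove the snapshot once it is no longer needed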

diff --git a/docs/kb/static-columns.rst b/docs/kb/static-columns.rst
--- a/docs/kb/static-columns.rst
+++ b/docs/kb/static-columns.rst
@@ -1,6 +1,6 @@
-Scylla payload sent duplicated static columns
-=============================================
-Scylla payload, which refers to the actual network packets transferred from the Scylla server to the client as a result of a query, contains duplicate static columns.
+ScyllaDB payload sent duplicated static columns
+===============================================
+ScyllaDB payload, which refers to the actual network packets transferred from the ScyllaDB server to the client as a result of a query, contains duplicate static columns.

Issue description
-----------------
diff --git a/docs/kb/tombstones-flush.rst b/docs/kb/tombstones-flush.rst
--- a/docs/kb/tombstones-flush.rst
+++ b/docs/kb/tombstones-flush.rst
@@ -7,22 +7,22 @@ How to flush old tombstones from a table
Description
-----------
If you have large partitions with lots of tombstones, you can use this workaround to flush the old tombstones.
-To avoid data resurrection, make sure that tables are repaired (either with nodetool repair or Scylla Manager) before the ``gc_grace_seconds`` threshold is reached.
+To avoid data resurrection, make sure that tables are repaired (either with nodetool repair or ScyllaDB Manager) before the ``gc_grace_seconds`` threshold is reached.
After the repair finishes, any tombstone older than the previous repair can be flushed.

.. note:: Use :doc:`this article </troubleshooting/large-partition-table/>` to help you find large partitions.

Steps:
^^^^^^
-1. Run nodetool repair to synchronize the data between nodes. Alternatively, you can use Scylla Manager to run a repair.
+1. Run nodetool repair to synchronize the data between nodes. Alternatively, you can use ScyllaDB Manager to run a repair.

.. code-block:: sh

nodetool repair <options>;

2. Set the ``gc_grace_seconds`` to the time since last repair was started - For instance, if the last repair was executed one day ago, then set ``gc_grace_seconds`` to one day (86400sec). For more information, please refer to :doc:`this KB article </kb/gc-grace-seconds/>`.

-.. note:: To prevent the compaction of unsynched tombstones, it is important to get the timing correctly. If you are not sure what time should set, please contact `Scylla support <https://www.scylladb.com/product/support/>`_.
+.. note:: To prevent the compaction of unsynced tombstones, it is important to get the timing right. If you are not sure what value to set, please contact `ScyllaDB support <https://www.scylladb.com/product/support/>`_.

.. code-block:: sh

diff --git a/docs/kb/unresponsive-nodes.rst b/docs/kb/unresponsive-nodes.rst
--- a/docs/kb/unresponsive-nodes.rst
+++ b/docs/kb/unresponsive-nodes.rst
@@ -1,12 +1,12 @@
-==============================
-Scylla Nodes are Unresponsive
-==============================
+===============================
+ScyllaDB Nodes are Unresponsive
+===============================

**Topic: Performance Analysis**

**Issue:**

-Scylla nodes are unresponsive. They are shown as down, and I can't even establish new SSH connections to the cluster. The existing connections are slow.
+ScyllaDB nodes are unresponsive. They are shown as down, and I can't even establish new SSH connections to the cluster. The existing connections are slow.

**Environment: All**

@@ -15,20 +15,20 @@ Scylla nodes are unresponsive. They are shown as down, and I can't even establis
Root Cause
----------

-When Scylla is reporting itself as down, this may mean a Scylla-specific issue. But when the node as a whole starts reporting slowness and even establishing SSH connections is hard, that usually indicates a node level issue.
+When ScyllaDB reports itself as down, this may indicate a ScyllaDB-specific issue. But when the node as a whole becomes slow and even establishing SSH connections is hard, that usually indicates a node-level issue.

The most common cause is due to swap. There are two main situations we need to consider:

-* The system has swap configured. If the system needs to swap pages, it may swap the Scylla memory, and future access to that memory will be slow.
+* The system has swap configured. If the system needs to swap pages, it may swap the ScyllaDB memory, and future access to that memory will be slow.

-* The system does not have swap configured. In that case the kernel may go on a loop trying to free pages without being able to so, becoming a CPU-hog which eventually stalls the Scylla and other processes from executing.
+* The system does not have swap configured. In that case, the kernel may loop trying to free pages without being able to do so, becoming a CPU hog that eventually stalls ScyllaDB and other processes.



Resolution
----------

-1. Ideally, a healthy system should not swap. Scylla pre-allocates 93% of the memory by default, and never uses more than that. It leaves the remaining 7% of the memory for other tasks including the Operating System. Check with the ``top`` utility if there are other processes running which are consuming a lot of memory.
+1. Ideally, a healthy system should not swap. ScyllaDB pre-allocates 93% of the memory by default, and never uses more than that. It leaves the remaining 7% of the memory for other tasks including the Operating System. Check with the ``top`` utility if there are other processes running which are consuming a lot of memory.

* If there are other processes running but they are not essential, we recommend moving them to other machines.
* If there are other processes running and they are essential, the default reservation may not be enough. Change the reservation following the steps below.
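
A quick way to run the check described in step 1 (a sketch using standard Linux tools; nothing here is ScyllaDB-specific):

.. code-block:: shell

   free -h                            # overall memory and swap usage
   swapon --show                      # is any swap device or file active?
   top -b -n 1 -o %MEM | head -n 20   # largest memory consumers besides scylla
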
diff --git a/docs/kb/update-pk.rst b/docs/kb/update-pk.rst
--- a/docs/kb/update-pk.rst
+++ b/docs/kb/update-pk.rst
@@ -2,11 +2,11 @@
Update a Primary Key
==============================

-**Topic: Can you Update a Primary Key in Scylla?**
+**Topic: Can you Update a Primary Key in ScyllaDB?**

-**Audience: Scylla administrators**
+**Audience: ScyllaDB administrators**

-In Scylla, you cannot update a primary key. It is impossible to do so.
+In ScyllaDB, you cannot update a primary key; primary key values are immutable.

However, you can migrate the data from the old table with the old primary key to a new table with a new primary key.
There are two ways to handle the migration:
diff --git a/docs/kb/use-perf.rst b/docs/kb/use-perf.rst
--- a/docs/kb/use-perf.rst
+++ b/docs/kb/use-perf.rst
@@ -1,17 +1,17 @@
-==================================
-Using the perf utility with Scylla
-==================================
+====================================
+Using the perf utility with ScyllaDB
+====================================

.. meta::
:title:
:description: Debugging or Diving into a Pegged Shard
:keywords: perf, pegged shard, list processes, analyze perf issue

-This article contains useful tips & tricks for using the `perf` utility with Scylla.
+This article contains useful tips & tricks for using the `perf` utility with ScyllaDB.
The `perf` utility is particularly useful when debugging a pegged shard.


-Due to its thread-per-core nature, looking at aggregates is rarely useful as it tends to hide bad behavior that is localized to specific CPUs. Looking at an individual CPU will make those anomalies easier to see. Once you notice that a Scylla shard requires investigation (for example, when the Scylla Monitor shard view shows that a particular shard is more loaded than others), you can use the ``seastar-cpu-map.sh`` script described :doc:`here </kb/map-cpu/>` to determine which Linux CPU hosts that Scylla shard. For example:
+Due to its thread-per-core nature, looking at aggregates is rarely useful as it tends to hide bad behavior that is localized to specific CPUs. Looking at an individual CPU will make those anomalies easier to see. Once you notice that a ScyllaDB shard requires investigation (for example, when the ScyllaDB Monitor shard view shows that a particular shard is more loaded than others), you can use the ``seastar-cpu-map.sh`` script described :doc:`here </kb/map-cpu/>` to determine which Linux CPU hosts that ScyllaDB shard. For example:
.. code-block:: bash

seastar-cpu-map.sh -n scylla -s 0
@@ -30,9 +30,9 @@ When is perf useful?

``Perf`` is most useful when the CPU being probed runs at 100% utilization so that you can identify large chunks of execution time used by particular functions.

-Note that due to polling, Scylla will easily drive CPUs to 100% even when it is not bottlenecked. It will spin (poll) for some time, waiting for new requests. It tends to show in the perf reports as functions related to polling having high CPU time.
+Note that due to polling, ScyllaDB will easily drive CPUs to 100% even when it is not bottlenecked. It will spin (poll) for some time, waiting for new requests. It tends to show in the perf reports as functions related to polling having high CPU time.

-Perf can also be a useful tool when you suspect that something that shouldn’t be running is running. One example is systems with very high ``reactor_utilization`` (indicating non-polling work), where the Linux view of ``system`` CPU utilization is also high. It indicates that the Linux Kernel, not Scylla, is the main user of the CPU, so additional investigation is needed.
+Perf can also be a useful tool when you suspect that something that shouldn’t be running is running. One example is systems with very high ``reactor_utilization`` (indicating non-polling work), where the Linux view of ``system`` CPU utilization is also high. It indicates that the Linux Kernel, not ScyllaDB, is the main user of the CPU, so additional investigation is needed.
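
For example, to focus ``perf`` on a single CPU that hosts a suspect shard (a sketch; CPU 3 is just a placeholder, use the CPU you identified with ``seastar-cpu-map.sh``):

.. code-block:: shell

   # Live view of the hottest functions on CPU 3 only
   sudo perf top -C 3

   # Or record ~10 seconds of samples with call graphs and inspect them afterwards
   sudo perf record -C 3 -g -- sleep 10
   sudo perf report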

perf top
--------
diff --git a/docs/kb/yaml-address.rst b/docs/kb/yaml-address.rst
--- a/docs/kb/yaml-address.rst
+++ b/docs/kb/yaml-address.rst
@@ -1,17 +1,17 @@

-Configure Scylla Networking with Multiple NIC/IP Combinations
-=============================================================
+Configure ScyllaDB Networking with Multiple NIC/IP Combinations
+===============================================================

There are many ways to configure IP addresses in scylla.yaml. Setting the IP addresses incorrectly can yield less than optimal results. This article focuses on configuring the addresses which are vital to network communication.

This article contains examples of the different ways to configure networking in scylla.yaml. The entire scope for address configuration is in the :ref:`Admin guide <admin-address-configuration-in-scylla>`.

-As these values depend on a particular network configuration in your setup there are a few ways to configure the address parameters. In the examples below, we will provide instructions for the most common use cases (all in the resolution of a single Scylla node).
+As these values depend on the particular network configuration of your setup, there are a few ways to configure the address parameters. In the examples below, we provide instructions for the most common use cases (all described for a single ScyllaDB node).

1 NIC, 1 IP
-----------

-This is the case where a Scylla cluster is meant to operate in a single subnet with a single address space (no "public/internal IP"s).
+This is the case where a ScyllaDB cluster is meant to operate in a single subnet with a single address space (no "public/internal IP"s).

In this case:

@@ -67,5 +67,5 @@ In this case:
Additional References
---------------------

-:doc:`Administration Guide </operating-scylla/admin>` - User guide for Scylla Administration
+:doc:`Administration Guide </operating-scylla/admin>` - User guide for ScyllaDB Administration

diff --git a/docs/operating-scylla/_common/networking-ports.rst b/docs/operating-scylla/_common/networking-ports.rst
--- a/docs/operating-scylla/_common/networking-ports.rst
+++ b/docs/operating-scylla/_common/networking-ports.rst
@@ -16,7 +16,7 @@ Port Description Protocol
------ -------------------------------------------- --------
7199 JMX management TCP
------ -------------------------------------------- --------
-10000 Scylla REST API TCP
+10000 ScyllaDB REST API TCP
------ -------------------------------------------- --------
9180 Prometheus API TCP
------ -------------------------------------------- --------
diff --git a/docs/operating-scylla/_common/tools_index.rst b/docs/operating-scylla/_common/tools_index.rst
--- a/docs/operating-scylla/_common/tools_index.rst
+++ b/docs/operating-scylla/_common/tools_index.rst
@@ -1,21 +1,21 @@
-* :doc:`Nodetool Reference</operating-scylla/nodetool>` - Scylla commands for managing Scylla node or cluster using the command-line nodetool utility.
+* :doc:`Nodetool Reference</operating-scylla/nodetool>` - ScyllaDB commands for managing a ScyllaDB node or cluster using the command-line nodetool utility.
* :doc:`CQLSh - the CQL shell</cql/cqlsh>`.
* :doc:`Admin REST API - ScyllaDB Node Admin API</operating-scylla/rest>`.
* :doc:`Tracing </using-scylla/tracing>` - a ScyllaDB tool for debugging and analyzing internal flows in the server.
-* :doc:`SSTableloader </operating-scylla/admin-tools/sstableloader>` - Bulk load the sstables found in the directory to a Scylla cluster
-* :doc:`Scylla SStable </operating-scylla/admin-tools/scylla-sstable>` - Validates and dumps the content of SStables, generates a histogram, dumps the content of the SStable index.
-* :doc:`Scylla Types </operating-scylla/admin-tools/scylla-types/>` - Examines raw values obtained from SStables, logs, coredumps, etc.
-* :doc:`cassandra-stress </operating-scylla/admin-tools/cassandra-stress/>` A tool for benchmarking and load testing a Scylla and Cassandra clusters.
+* :doc:`SSTableloader </operating-scylla/admin-tools/sstableloader>` - Bulk load the sstables found in the directory to a ScyllaDB cluster
+* :doc:`ScyllaDB SStable </operating-scylla/admin-tools/scylla-sstable>` - Validates and dumps the content of SStables, generates a histogram, dumps the content of the SStable index.
+* :doc:`ScyllaDB Types </operating-scylla/admin-tools/scylla-types/>` - Examines raw values obtained from SStables, logs, coredumps, etc.
+* :doc:`cassandra-stress </operating-scylla/admin-tools/cassandra-stress/>` - A tool for benchmarking and load testing ScyllaDB and Cassandra clusters.
* :doc:`SSTabledump </operating-scylla/admin-tools/sstabledump>`
* :doc:`SSTableMetadata </operating-scylla/admin-tools/sstablemetadata>`
* sstablelevelreset - Reset level to 0 on a selected set of SSTables that use LeveledCompactionStrategy (LCS).
* sstablerepairedset - Mark specific SSTables as repaired or unrepaired.
* `scyllatop <https://www.scylladb.com/2016/03/22/scyllatop/>`_ - A terminal base top-like tool for scylladb collectd/prometheus metrics.
-* :doc:`scylla_dev_mode_setup</getting-started/installation-common/dev-mod>` - run Scylla in Developer Mode.
+* :doc:`scylla_dev_mode_setup</getting-started/installation-common/dev-mod>` - run ScyllaDB in Developer Mode.
* :doc:`perftune</operating-scylla/admin-tools/perftune>` - performance configuration.
* :doc:`Reading mutation fragments</operating-scylla/admin-tools/select-from-mutation-fragments/>` - dump the underlying mutation data from tables.
* :doc:`Maintenance socket </operating-scylla/admin-tools/maintenance-socket/>` - a Unix domain socket for full-permission CQL connection.
-* :doc:`Maintenance mode </operating-scylla/admin-tools/maintenance-mode/>` - a mode for performing maintenance tasks on an offline Scylla node.
+* :doc:`Maintenance mode </operating-scylla/admin-tools/maintenance-mode/>` - a mode for performing maintenance tasks on an offline ScyllaDB node.


Run each tool with ``-h``, ``--help`` for full options description.
diff --git a/docs/operating-scylla/admin-tools/cassandra-stress.rst b/docs/operating-scylla/admin-tools/cassandra-stress.rst
--- a/docs/operating-scylla/admin-tools/cassandra-stress.rst
+++ b/docs/operating-scylla/admin-tools/cassandra-stress.rst
@@ -1,7 +1,7 @@
Cassandra Stress
================

-The cassandra-stress tool is used for benchmarking and load testing both Scylla and Cassandra clusters. The cassandra-stress tool also supports testing arbitrary CQL tables and queries to allow users to benchmark their data model.
+The cassandra-stress tool is used for benchmarking and load testing both ScyllaDB and Cassandra clusters. The cassandra-stress tool also supports testing arbitrary CQL tables and queries to allow users to benchmark their data model.

This documentation focuses on user mode as this allows the testing of your actual schema.
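
As a simple starting point, a smoke-test run (not user mode) against a single node could look like this sketch; the node address, operation count, and thread count are only examples:

.. code-block:: shell

   cassandra-stress write n=100000 cl=QUORUM -rate threads=32 -node 10.0.0.1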

diff --git a/docs/operating-scylla/admin-tools/index.rst b/docs/operating-scylla/admin-tools/index.rst
--- a/docs/operating-scylla/admin-tools/index.rst
+++ b/docs/operating-scylla/admin-tools/index.rst
@@ -9,13 +9,13 @@ Admin Tools
CQLSh </cql/cqlsh>
Admin REST API </operating-scylla/rest>
Tracing </using-scylla/tracing>
- Scylla SStable </operating-scylla/admin-tools/scylla-sstable/>
- Scylla Types </operating-scylla/admin-tools/scylla-types/>
+ ScyllaDB SStable </operating-scylla/admin-tools/scylla-sstable/>
+ ScyllaDB Types </operating-scylla/admin-tools/scylla-types/>
sstableloader
cassandra-stress </operating-scylla/admin-tools/cassandra-stress/>
sstabledump
sstablemetadata
- Scylla Logs </getting-started/logging/>
+ ScyllaDB Logs </getting-started/logging/>
perftune
Virtual Tables </operating-scylla/admin-tools/virtual-tables/>
Reading mutation fragments </operating-scylla/admin-tools/select-from-mutation-fragments/>
@@ -29,4 +29,4 @@ Admin Tools

.. include:: /operating-scylla/_common/tools_index.rst

-The `Admin Procedures and Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/admin-procedures-and-basic-monitoring/topic/admin-procedures-and-monitoring/>`_ on Scylla University provides more training and examples material on this subject.
+The `Admin Procedures and Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/admin-procedures-and-basic-monitoring/topic/admin-procedures-and-monitoring/>`_ on ScyllaDB University provides more training material and examples on this subject.
diff --git a/docs/operating-scylla/admin-tools/scylla-sstable.rst b/docs/operating-scylla/admin-tools/scylla-sstable.rst
--- a/docs/operating-scylla/admin-tools/scylla-sstable.rst
+++ b/docs/operating-scylla/admin-tools/scylla-sstable.rst
@@ -1,5 +1,5 @@
-Scylla SStable
-==============
+ScyllaDB SStable
+================

Introduction
-------------
@@ -169,7 +169,7 @@ It is possible to filter the data to print via the ``--partitions`` or
``--partitions-file`` options. Both expect partition key values in the hexdump
format.

-Supports both a text and JSON output. The text output uses the built-in Scylla
+Supports both text and JSON output. The text output uses the built-in ScyllaDB
printers, which are also used when logging mutation-related data structures.
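
For illustration, dumping a single partition from one SSTable might look like the sketch below; the operation name is assumed to be ``dump-data``, the SSTable path and the hex-encoded partition key are placeholders, and on some versions you may also need to point the tool at the table schema explicitly:

.. code-block:: shell

   scylla sstable dump-data --partitions 000400000001 /var/lib/scylla/data/my_keyspace/my_table-*/me-*-Data.db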

The schema of the JSON output is the following:
@@ -479,7 +479,7 @@ The content is dumped in JSON, using the following schema:
dump-scylla-metadata
^^^^^^^^^^^^^^^^^^^^

-Dumps the content of the scylla-metadata component. Contains Scylla-specific
+Dumps the content of the scylla-metadata component. Contains ScyllaDB-specific
metadata about the data component. This component won't be present in SStables
produced by Apache Cassandra.

@@ -715,7 +715,7 @@ consume_sstable_start(sst)

* Part of the Consume API.
* Called on the start of each sstable.
-* The parameter is of type `Scylla.sstable <scylla-sstable-type_>`_.
+* The parameter is of type `ScyllaDB.sstable <scylla-sstable-type_>`_.
* When SStables are merged (``--merge``), the parameter is ``nil``.

Returns whether to stop. If ``true``, `consume_sstable_end() <scylla-consume-sstable-end-method_>`_ is called, skipping the content of the sstable (or that of the entire stream if ``--merge`` is used). If ``false``, consumption follows with the content of the sstable.
@@ -726,31 +726,31 @@ consume_partition_start(ps)
"""""""""""""""""""""""""""

* Part of the Consume API. Called on the start of each partition.
-* The parameter is of type `Scylla.partition_start <scylla-partition-start-type_>`_.
+* The parameter is of type `ScyllaDB.partition_start <scylla-partition-start-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, skipping the content of the partition. If ``false``, consumption follows with the content of the partition.

consume_static_row(sr)
""""""""""""""""""""""

* Part of the Consume API.
* Called if the partition has a static row.
-* The parameter is of type `Scylla.static_row <scylla-static-row-type_>`_.
+* The parameter is of type `ScyllaDB.static_row <scylla-static-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, and the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.

consume_clustering_row(cr)
""""""""""""""""""""""""""

* Part of the Consume API.
* Called for each clustering row.
-* The parameter is of type `Scylla.clustering_row <scylla-clustering-row-type_>`_.
+* The parameter is of type `ScyllaDB.clustering_row <scylla-clustering-row-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.

consume_range_tombstone_change(crt)
"""""""""""""""""""""""""""""""""""

* Part of the Consume API.
* Called for each range tombstone change.
-* The parameter is of type `Scylla.range_tombstone_change <scylla-range-tombstone-change-type_>`_.
+* The parameter is of type `ScyllaDB.range_tombstone_change <scylla-range-tombstone-change-type_>`_.
* Returns whether to stop. If ``true``, `consume_partition_end() <scylla-consume-partition-end-method_>`_ is called, the remaining content of the partition is skipped. If ``false``, consumption follows with the remaining content of the partition.

.. _scylla-consume-partition-end-method:
@@ -779,8 +779,8 @@ consume_stream_end()
* Part of the Consume API.
* Called at the very end of the stream.

-Scylla LUA API
-~~~~~~~~~~~~~~
+ScyllaDB LUA API
+~~~~~~~~~~~~~~~~

In addition to the `ScyllaDB Consume API <scylla-consume-api_>`_, the Lua bindings expose various types and methods that allow you to work with ScyllaDB types and values.
The listing uses the following terminology:
@@ -807,8 +807,8 @@ Magic methods have their signature defined by Lua and so that is not described h

.. _scylla-atomic-cell-type:

-Scylla.atomic_cell
-""""""""""""""""""
+ScyllaDB.atomic_cell
+""""""""""""""""""""

Attributes:

@@ -817,14 +817,14 @@ Attributes:
* type (string) - one of: ``regular``, ``counter-update``, ``counter-shards``, ``frozen-collection`` or ``collection``.
* has_ttl (boolean) - is the cell expiring?
* ttl (integer) - time to live in seconds, ``nil`` if cell is not expiring.
-* expiry (`Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell expires, ``nil`` if cell is not expiring.
-* deletion_time (`Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell was deleted, ``nil`` unless cell is dead or expiring.
+* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell expires, ``nil`` if cell is not expiring.
+* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which cell was deleted, ``nil`` unless cell is dead or expiring.
* value:

- ``nil`` if cell is dead.
- appropriate Lua native type if type == ``regular``.
- integer if type == ``counter-update``.
- - `Scylla.counter_shards_value <scylla-counter-shards-value-type_>`_ if type == ``counter-shards``.
+ - `ScyllaDB.counter_shards_value <scylla-counter-shards-value-type_>`_ if type == ``counter-shards``.

A counter-shard table has the following keys:

@@ -834,63 +834,63 @@ A counter-shard table has the following keys:

.. _scylla-clustering-key-type:

-Scylla.clustering_key
-"""""""""""""""""""""
+ScyllaDB.clustering_key
+"""""""""""""""""""""""

Attributes:

-* components (table) - the column values (`Scylla.data_value <scylla-data-value-type_>`_) making up the composite clustering key.
+* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite clustering key.

Methods:

* to_hex - convert the key to its serialized format, encoded in hex.

Magic methods:

-* __tostring - can be converted to string with tostring(), uses the built-in operator<< in Scylla.
+* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.

.. _scylla-clustering-row-type:

-Scylla.clustering_row
-"""""""""""""""""""""
+ScyllaDB.clustering_row
+"""""""""""""""""""""""

Attributes:

* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
-* tombstone (`Scylla.tombstone <scylla-tombstone-type_>`_) - row tombstone, ``nil`` if no tombstone.
-* shadowable_tombstone (`Scylla.tombstone <scylla-tombstone-type_>`_) - shadowable tombstone of the row tombstone, ``nil`` if no tombstone.
-* marker (`Scylla.row_marker <scylla-row-marker-type_>`_) - the row marker, ``nil`` if row doesn't have one.
-* cells (table) - table of cells, where keys are the column names and the values are either of type `Scylla.atomic_cell <scylla-atomic-cell-type_>`_ or `Scylla.collection <scylla-collection-type_>`_.
+* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - row tombstone, ``nil`` if no tombstone.
+* shadowable_tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - shadowable tombstone of the row tombstone, ``nil`` if no tombstone.
+* marker (`ScyllaDB.row_marker <scylla-row-marker-type_>`_) - the row marker, ``nil`` if row doesn't have one.
+* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.

See also:

-* `Scylla.unserialize_clustering_key() <scylla-unserialize-clustering-key-method_>`_.
+* `ScyllaDB.unserialize_clustering_key() <scylla-unserialize-clustering-key-method_>`_.

.. _scylla-collection-type:

-Scylla.collection
-"""""""""""""""""
+ScyllaDB.collection
+"""""""""""""""""""

Attributes:

* type (string) - always ``collection`` for collection.
-* tombstone (`Scylla.tombstone <scylla-tombstone-type_>`_) - ``nil`` if no tombstone.
-* cells (table) - the collection cells, each collection cell is a table, with a ``key`` and ``value`` attribute. The key entry is the key of the collection cell for actual collections (list, set and map) and is of type `Scylla.data-value <scylla-data-value-type_>`_. For tuples and UDT this is just an empty string. The value entry is the value of the collection cell and is of type `Scylla.atomic-cell <scylla-atomic-cell-type_>`_.
+* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - ``nil`` if no tombstone.
+* cells (table) - the collection cells; each collection cell is a table with a ``key`` and ``value`` attribute. The key entry is the key of the collection cell for actual collections (list, set, and map) and is of type `ScyllaDB.data-value <scylla-data-value-type_>`_. For tuples and UDTs, this is just an empty string. The value entry is the value of the collection cell and is of type `ScyllaDB.atomic-cell <scylla-atomic-cell-type_>`_.

.. _scylla-collection-cell-value-type:

-Scylla.collection_cell_value
-""""""""""""""""""""""""""""
+ScyllaDB.collection_cell_value
+""""""""""""""""""""""""""""""

Attributes:

* key (sstring) - collection cell key in human readable form.
-* value (`Scylla.atomic_cell <scylla-atomic-cell-type_>`_) - collection cell value.
+* value (`ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_) - collection cell value.

.. _scylla-column-definition-type:

-Scylla.column_definition
-""""""""""""""""""""""""
+ScyllaDB.column_definition
+""""""""""""""""""""""""""

Attributes:

@@ -900,8 +900,8 @@ Attributes:

.. _scylla-counter-shards-value-type:

-Scylla.counter_shards_value
-"""""""""""""""""""""""""""
+ScyllaDB.counter_shards_value
+"""""""""""""""""""""""""""""

Attributes:

@@ -918,8 +918,8 @@ Magic methods:

.. _scylla-data-value-type:

-Scylla.data_value
-"""""""""""""""""
+ScyllaDB.data_value
+"""""""""""""""""""

Attributes:

@@ -931,8 +931,8 @@ Magic methods:

.. _scylla-gc-clock-time-point-type:

-Scylla.gc_clock_time_point
-""""""""""""""""""""""""""
+ScyllaDB.gc_clock_time_point
+""""""""""""""""""""""""""""

A time point belonging to the gc_clock, in UTC.

@@ -954,13 +954,13 @@ Magic methods:

See also:

-* `Scylla.now() <scylla-now-method_>`_.
-* `Scylla.time_point_from_string() <scylla-time-point-from-string-method_>`_.
+* `ScyllaDB.now() <scylla-now-method_>`_.
+* `ScyllaDB.time_point_from_string() <scylla-time-point-from-string-method_>`_.

.. _scylla-json-writer-type:

-Scylla.json_writer
-""""""""""""""""""
+ScyllaDB.json_writer
+""""""""""""""""""""

A JSON writer object, with both low-level and high-level APIs.
The low-level API allows you to write custom JSON and it loosely follows the API of `rapidjson::Writer <https://rapidjson.org/classrapidjson_1_1_writer.html>`_ (upon which it is implemented).
@@ -993,92 +993,92 @@ High level API Methods:

.. _scylla-new-json-writer-method:

-Scylla.new_json_writer()
-""""""""""""""""""""""""
+ScyllaDB.new_json_writer()
+""""""""""""""""""""""""""

-Create a `Scylla.json_writer <scylla-json-writer-type_>`_ instance.
+Create a `ScyllaDB.json_writer <scylla-json-writer-type_>`_ instance.

.. _scylla-new-position-in-partition-method:

-Scylla.new_position_in_partition()
-""""""""""""""""""""""""""""""""""
+ScyllaDB.new_position_in_partition()
+""""""""""""""""""""""""""""""""""""

-Creates a `Scylla.position_in_partition <scylla-position-in-partition-type_>`_ instance.
+Creates a `ScyllaDB.position_in_partition <scylla-position-in-partition-type_>`_ instance.

Arguments:

* weight (integer) - the weight of the key.
-* key (`Scylla.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, optional.
+* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, optional.

.. _scylla-new-ring-position-method:

-Scylla.new_ring_position()
-""""""""""""""""""""""""""
+ScyllaDB.new_ring_position()
+""""""""""""""""""""""""""""

-Creates a `Scylla.ring_position <scylla-ring-position-type_>`_ instance.
+Creates a `ScyllaDB.ring_position <scylla-ring-position-type_>`_ instance.

Has several overloads:

-* ``Scylla.new_ring_position(weight, key)``.
-* ``Scylla.new_ring_position(weight, token)``.
-* ``Scylla.new_ring_position(weight, key, token)``.
+* ``ScyllaDB.new_ring_position(weight, key)``.
+* ``ScyllaDB.new_ring_position(weight, token)``.
+* ``ScyllaDB.new_ring_position(weight, key, token)``.

Where:

* weight (integer) - the weight of the key.
-* key (`Scylla.partition_key <scylla-partition-key-type_>`_) - the partition key.
+* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key.
* token (integer) - the token (of the key if a key is provided).

.. _scylla-now-method:

-Scylla.now()
-""""""""""""
+ScyllaDB.now()
+""""""""""""""

-Create a `Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance, representing the current time.
+Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance, representing the current time.

.. _scylla-partition-key-type:

-Scylla.partition_key
-""""""""""""""""""""
+ScyllaDB.partition_key
+""""""""""""""""""""""

Attributes:

-* components (table) - the column values (`Scylla.data_value <scylla-data-value-type_>`_) making up the composite partition key.
+* components (table) - the column values (`ScyllaDB.data_value <scylla-data-value-type_>`_) making up the composite partition key.

Methods:

* to_hex - convert the key to its serialized format, encoded in hex.

Magic methods:

-* __tostring - can be converted to string with tostring(), uses the built-in operator<< in Scylla.
+* __tostring - can be converted to string with tostring(), uses the built-in operator<< in ScyllaDB.

See also:

-* :ref:`Scylla.unserialize_partition_key() <scylla-unserialize-partition-key-method>`.
-* :ref:`Scylla.token_of() <scylla-token-of-method>`.
+* :ref:`ScyllaDB.unserialize_partition_key() <scylla-unserialize-partition-key-method>`.
+* :ref:`ScyllaDB.token_of() <scylla-token-of-method>`.

.. _scylla-partition-start-type:

-Scylla.partition_start
-""""""""""""""""""""""
+ScyllaDB.partition_start
+""""""""""""""""""""""""

Attributes:

* key - the partition key's value as the appropriate Lua native type.
* token (integer) - the partition key's token.
-* tombstone (`Scylla.tombstone <scylla-tombstone-type_>`_) - the partition tombstone, ``nil`` if no tombstone.
+* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - the partition tombstone, ``nil`` if no tombstone.

.. _scylla-position-in-partition-type:

-Scylla.position_in_partition
-""""""""""""""""""""""""""""
+ScyllaDB.position_in_partition
+""""""""""""""""""""""""""""""

Currently used only for clustering positions.

Attributes:

-* key (`Scylla.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, ``nil`` if the position in partition represents the min or max clustering positions.
+* key (`ScyllaDB.clustering_key <scylla-clustering-key-type_>`_) - the clustering key, ``nil`` if the position in partition represents the min or max clustering positions.
* weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key). If key attribute is ``nil``, the weight is never 0.

Methods:
@@ -1087,28 +1087,28 @@ Methods:

See also:

-* `Scylla.new_position_in_partition() <scylla-new-position-in-partition-method_>`_.
+* `ScyllaDB.new_position_in_partition() <scylla-new-position-in-partition-method_>`_.

.. _scylla-range-tombstone-change-type:

-Scylla.range_tombstone_change
-"""""""""""""""""""""""""""""
+ScyllaDB.range_tombstone_change
+"""""""""""""""""""""""""""""""

Attributes:

* key ($TYPE) - the clustering key's value as the appropriate Lua native type.
* key_weight (integer) - weight of the position, either -1 (before key), 0 (at key) or 1 (after key).
-* tombstone (`Scylla.tombstone <scylla-tombstone-type_>`_) - tombstone, ``nil`` if no tombstone.
+* tombstone (`ScyllaDB.tombstone <scylla-tombstone-type_>`_) - tombstone, ``nil`` if no tombstone.

.. _scylla-ring-position-type:

-Scylla.ring_position
-""""""""""""""""""""
+ScyllaDB.ring_position
+""""""""""""""""""""""

Attributes:

* token (integer) - the token, ``nil`` if the ring position represents the min or max ring positions.
-* key (`Scylla.partition_key <scylla-partition-key-type_>`_) - the partition key, ``nil`` if the ring position represents a position before/after a token.
+* key (`ScyllaDB.partition_key <scylla-partition-key-type_>`_) - the partition key, ``nil`` if the ring position represents a position before/after a token.
* weight (integer) - weight of the position, either -1 (before key/token), 0 (at key) or 1 (after key/token). If key attribute is ``nil``, the weight is never 0.

Methods:
@@ -1117,93 +1117,93 @@ Methods:

See also:

-* `Scylla.new_ring_position() <scylla-new-ring-position-method_>`_.
+* `ScyllaDB.new_ring_position() <scylla-new-ring-position-method_>`_.

.. _scylla-row-marker-type:

-Scylla.row_marker
-"""""""""""""""""
+ScyllaDB.row_marker
+"""""""""""""""""""

Attributes:

* timestamp (integer).
* is_live (boolean) - is the marker live?
* has_ttl (boolean) - is the marker expiring?
* ttl (integer) - time to live in seconds, ``nil`` if marker is not expiring.
-* expiry (`Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker expires, ``nil`` if marker is not expiring.
-* deletion_time (`Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker was deleted, ``nil`` unless marker is dead or expiring.
+* expiry (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker expires, ``nil`` if marker is not expiring.
+* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - time at which marker was deleted, ``nil`` unless marker is dead or expiring.

.. _scylla-schema-type:

-Scylla.schema
-"""""""""""""
+ScyllaDB.schema
+"""""""""""""""

Attributes:

-* partition_key_columns (table) - list of `Scylla.column_definition <scylla-column-definition-type_>`_ of the key columns making up the partition key.
-* clustering_key_columns (table) - list of `Scylla.column_definition <scylla-column-definition-type_>`_ of the key columns making up the clustering key.
-* static_columns (table) - list of `Scylla.column_definition <scylla-column-definition-type_>`_ of the static columns.
-* regular_columns (table) - list of `Scylla.column_definition <scylla-column-definition-type_>`_ of the regular columns.
-* all_columns (table) - list of `Scylla.column_definition <scylla-column-definition-type_>`_ of all columns.
+* partition_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the partition key.
+* clustering_key_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the key columns making up the clustering key.
+* static_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the static columns.
+* regular_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of the regular columns.
+* all_columns (table) - list of `ScyllaDB.column_definition <scylla-column-definition-type_>`_ of all columns.

.. _scylla-sstable-type:

-Scylla.sstable
-""""""""""""""
+ScyllaDB.sstable
+""""""""""""""""

Attributes:

* filename (string) - the full path of the sstable Data component file;

.. _scylla-static-row-type:

-Scylla.static_row
-"""""""""""""""""
+ScyllaDB.static_row
+"""""""""""""""""""

Attributes:

-* cells (table) - table of cells, where keys are the column names and the values are either of type `Scylla.atomic_cell <scylla-atomic-cell-type_>`_ or `Scylla.collection <scylla-collection-type_>`_.
+* cells (table) - table of cells, where keys are the column names and the values are either of type `ScyllaDB.atomic_cell <scylla-atomic-cell-type_>`_ or `ScyllaDB.collection <scylla-collection-type_>`_.

.. _scylla-time-point-from-string-method:

-Scylla.time_point_from_string()
-"""""""""""""""""""""""""""""""
+ScyllaDB.time_point_from_string()
+"""""""""""""""""""""""""""""""""

-Create a `Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance from the passed in string.
+Create a `ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_ instance from the passed in string.
Argument is string, using the same format as the CQL timestamp type, see https://en.wikipedia.org/wiki/ISO_8601.

.. _scylla-token-of-method:

-Scylla.token_of()
-"""""""""""""""""
+ScyllaDB.token_of()
+"""""""""""""""""""

-Compute and return the token (integer) for a `Scylla.partition_key <scylla-partition-key-type_>`_.
+Compute and return the token (integer) for a `ScyllaDB.partition_key <scylla-partition-key-type_>`_.

.. _scylla-tombstone-type:

-Scylla.tombstone
-""""""""""""""""
+ScyllaDB.tombstone
+""""""""""""""""""

Attributes:

* timestamp (integer)
-* deletion_time (`Scylla.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - the point in time at which the tombstone was deleted.
+* deletion_time (`ScyllaDB.gc_clock_time_point <scylla-gc-clock-time-point-type_>`_) - the point in time at which the tombstone was deleted.

.. _scylla-unserialize-clustering-key-method:

-Scylla.unserialize_clustering_key()
-"""""""""""""""""""""""""""""""""""
+ScyllaDB.unserialize_clustering_key()
+"""""""""""""""""""""""""""""""""""""

-Create a `Scylla.clustering_key <scylla-clustering-key-type_>`_ instance.
+Create a `ScyllaDB.clustering_key <scylla-clustering-key-type_>`_ instance.

Argument is a string representing serialized clustering key in hex format.

.. _scylla-unserialize-partition-key-method:

-Scylla.unserialize_partition_key()
-""""""""""""""""""""""""""""""""""
+ScyllaDB.unserialize_partition_key()
+""""""""""""""""""""""""""""""""""""

-Create a `Scylla.partition_key <scylla-partition-key-type_>`_ instance.
+Create a `ScyllaDB.partition_key <scylla-partition-key-type_>`_ instance.

Argument is a string representing serialized partition key in hex format.

diff --git a/docs/operating-scylla/admin-tools/scylla-types.rst b/docs/operating-scylla/admin-tools/scylla-types.rst
--- a/docs/operating-scylla/admin-tools/scylla-types.rst
+++ b/docs/operating-scylla/admin-tools/scylla-types.rst
@@ -1,4 +1,4 @@
-Scylla Types
+ScyllaDB Types
==============

Introduction
diff --git a/docs/operating-scylla/admin-tools/select-from-mutation-fragments.rst b/docs/operating-scylla/admin-tools/select-from-mutation-fragments.rst
--- a/docs/operating-scylla/admin-tools/select-from-mutation-fragments.rst
+++ b/docs/operating-scylla/admin-tools/select-from-mutation-fragments.rst
@@ -8,7 +8,7 @@ Reading mutation fragments

The ``SELECT * FROM MUTATION_FRAGMENTS()`` statement allows for reading the raw underlying mutations (data) from a table.
This is intended to be used as a diagnostics tool to debug performance or correctness issues, where inspecting the raw underlying data, as scylla stores it, is desired.
-So far this was only possible with sstables, using a tool like :doc:`Scylla SStable</operating-scylla/admin-tools/scylla-sstable>`.
+So far this was only possible with sstables, using a tool like :doc:`ScyllaDB SStable</operating-scylla/admin-tools/scylla-sstable>`.
This statement allows inspecting the content of the row-cache, as well as that of individual memtables, in addition to individual sstables.

The statement has to be used on an existing table, by using a regular ``SELECT`` query, which wraps the table name in ``MUTATION_FRAGMENTS()``. For example, to dump all mutations from ``my_keyspace.my_table``:
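A minimal sketch of such a query, issued through ``cqlsh`` (the keyspace and table names are illustrative):

.. code-block:: shell

   # Dump all mutation fragments (row-cache, memtables, and sstables) for one table.
   cqlsh -e "SELECT * FROM MUTATION_FRAGMENTS(my_keyspace.my_table);"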
diff --git a/docs/operating-scylla/admin-tools/sstabledump.rst b/docs/operating-scylla/admin-tools/sstabledump.rst
--- a/docs/operating-scylla/admin-tools/sstabledump.rst
+++ b/docs/operating-scylla/admin-tools/sstabledump.rst
@@ -1,8 +1,8 @@
SSTabledump
============

-.. warning:: SSTabledump is deprecated since Scylla 5.4, and will be removed in a future release.
- Please consider switching to :doc:`Scylla SSTable </operating-scylla/admin-tools/scylla-sstable>`.
+.. warning:: SSTabledump is deprecated since ScyllaDB 5.4, and will be removed in a future release.
+ Please consider switching to :doc:`ScyllaDB SSTable </operating-scylla/admin-tools/scylla-sstable>`.

This tool allows you to convert an SSTable into a JSON format file.
If you need more flexibility or want to dump more than just the data-component, see :doc:`scylla-sstable </operating-scylla/admin-tools/scylla-sstable>`.
diff --git a/docs/operating-scylla/admin-tools/sstablemetadata.rst b/docs/operating-scylla/admin-tools/sstablemetadata.rst
--- a/docs/operating-scylla/admin-tools/sstablemetadata.rst
+++ b/docs/operating-scylla/admin-tools/sstablemetadata.rst
@@ -1,7 +1,7 @@
SSTableMetadata
===============

-.. warning:: SSTableMetadata is deprecated since Scylla 5.4, and will be removed in a future release.
+.. warning:: SSTableMetadata is deprecated since ScyllaDB 5.4, and will be removed in a future release.
Please consider switching to :ref:`scylla sstable dump-statistics` and :ref:`scylla sstable dump-summary`.

SSTableMetadata prints metadata in ``Statistics.db`` and ``Summary.db`` about the specified SSTables to the console.
diff --git a/docs/operating-scylla/admin.rst b/docs/operating-scylla/admin.rst
--- a/docs/operating-scylla/admin.rst
+++ b/docs/operating-scylla/admin.rst
@@ -1,60 +1,60 @@
Administration Guide
********************

-For training material, also check out the `Admin Procedures lesson <https://university.scylladb.com/courses/scylla-operations/lessons/admin-procedures-and-basic-monitoring/>`_ on Scylla University.
+For training material, also check out the `Admin Procedures lesson <https://university.scylladb.com/courses/scylla-operations/lessons/admin-procedures-and-basic-monitoring/>`_ on ScyllaDB University.

System requirements
===================
-Make sure you have met the :doc:`System Requirements </getting-started/system-requirements>` before you install and configure Scylla.
+Make sure you have met the :doc:`System Requirements </getting-started/system-requirements>` before you install and configure ScyllaDB.

Download and Install
====================

-See the :doc:`getting started page </getting-started/index>` for info on installing Scylla on your platform.
+See the :doc:`getting started page </getting-started/index>` for info on installing ScyllaDB on your platform.


System configuration
====================
-See :ref:`System Configuration Guide <system-configuration-files-and-scripts>` for details on optimum OS settings for Scylla. (These settings are performed automatically in the Scylla packages, Docker containers, and Amazon AMIs.)
+See :ref:`System Configuration Guide <system-configuration-files-and-scripts>` for details on optimum OS settings for ScyllaDB. (These settings are performed automatically in the ScyllaDB packages, Docker containers, and Amazon AMIs.)

.. _admin-scylla-configuration:

-Scylla Configuration
-====================
-Scylla configuration files are:
+ScyllaDB Configuration
+======================
+ScyllaDB configuration files are:

+-------------------------------------------------------+---------------------------------+
| Installed location | Description |
+=======================================================+=================================+
| :code:`/etc/default/scylla-server` (Ubuntu/Debian) | Server startup options |
| :code:`/etc/sysconfig/scylla-server` (others) | |
+-------------------------------------------------------+---------------------------------+
-| :code:`/etc/scylla/scylla.yaml` | Main Scylla configuration file |
+| :code:`/etc/scylla/scylla.yaml` | Main ScyllaDB configuration file|
+-------------------------------------------------------+---------------------------------+
| :code:`/etc/scylla/cassandra-rackdc.properties` | Rack & dc configuration file |
+-------------------------------------------------------+---------------------------------+

.. _check-your-current-version-of-scylla:

-Check your current version of Scylla
-------------------------------------
-This command allows you to check your current version of Scylla. Note that this command is not the :doc:`nodetool version </operating-scylla/nodetool-commands/version>` command which reports the CQL version.
+Check your current version of ScyllaDB
+--------------------------------------
+This command allows you to check your current version of ScyllaDB. Note that this command is not the :doc:`nodetool version </operating-scylla/nodetool-commands/version>` command, which reports the Apache Cassandra version that ScyllaDB is compatible with.
If you are looking for the CQL or Cassandra version, refer to the CQLSH reference for :ref:`SHOW VERSION <cqlsh-show-version>`.

.. code-block:: shell

scylla --version

-Output displays the Scylla version. Your results may differ.
+Output displays the ScyllaDB version. Your results may differ.

.. code-block:: shell

4.4.0-0.20210331.05c6a40f0

.. _admin-address-configuration-in-scylla:

-Address Configuration in Scylla
--------------------------------
+Address Configuration in ScyllaDB
+---------------------------------

The following addresses can be configured in scylla.yaml:

@@ -65,11 +65,11 @@ The following addresses can be configured in scylla.yaml:
* - Address Type
- Description
* - listen_address
- - Address Scylla listens for connections from other nodes. See storage_port and ssl_storage_ports.
+ - Address ScyllaDB listens for connections from other nodes. See storage_port and ssl_storage_ports.
* - rpc_address
- - Address on which Scylla is going to expect CQL client connections. See rpc_port, native_transport_port and native_transport_port_ssl in the :ref:`Networking <cqlsh-networking>` parameters.
+ - Address on which ScyllaDB is going to expect CQL client connections. See rpc_port, native_transport_port and native_transport_port_ssl in the :ref:`Networking <cqlsh-networking>` parameters.
* - broadcast_address
- - Address that is broadcasted to tell other Scylla nodes to connect to. Related to listen_address above.
+ - Address that is broadcasted to tell other ScyllaDB nodes to connect to. Related to listen_address above.
* - broadcast_rpc_address
- Address that is broadcasted to tell the clients to connect to. Related to rpc_address.
* - seeds
@@ -81,13 +81,13 @@ The following addresses can be configured in scylla.yaml:
* - prometheus_address
- Address for Prometheus queries. See prometheus_port in the :ref:`Networking <cqlsh-networking>` parameters and `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ for more details.
* - replace_node_first_boot
- - Host ID of a dead node this Scylla node is replacing. Refer to :doc:`Replace a Dead Node in a Scylla Cluster </operating-scylla/procedures/cluster-management/replace-dead-node>` for more details.
+ - Host ID of a dead node this ScyllaDB node is replacing. Refer to :doc:`Replace a Dead Node in a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/replace-dead-node>` for more details.

-.. note:: When the listen_address, rpc_address, broadcast_address, and broadcast_rpc_address parameters are not set correctly, Scylla does not work as expected.
+.. note:: When the listen_address, rpc_address, broadcast_address, and broadcast_rpc_address parameters are not set correctly, ScyllaDB does not work as expected.
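As a quick sanity check, the currently configured addresses can be listed straight from ``scylla.yaml`` (a sketch assuming the default package install path):

.. code-block:: shell

   # Show the address-related settings configured on this node.
   grep -E '^(listen_address|rpc_address|broadcast_address|broadcast_rpc_address)' /etc/scylla/scylla.yaml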

scylla-server
-------------
-The :code:`scylla-server` file contains configuration related to starting up the Scylla server.
+The :code:`scylla-server` file contains configuration related to starting up the ScyllaDB server.

.. _admin-scylla.yaml:

@@ -98,7 +98,7 @@ The :code:`scylla-server` file contains configuration related to starting up the
Compression
-----------

-In Scylla, you can configure compression at rest and compression in transit.
+In ScyllaDB, you can configure compression at rest and compression in transit.
For compression in transit, you can configure compression between nodes or between the client and the node.


@@ -107,12 +107,12 @@ For compression in transit, you can configure compression between nodes or betwe
Client - Node Compression
^^^^^^^^^^^^^^^^^^^^^^^^^^

-Compression between the client and the node is set by the driver that the application is using to access Scylla.
+Compression between the client and the node is set by the driver that the application is using to access ScyllaDB.

For example:

-* `Scylla Python Driver <https://python-driver.docs.scylladb.com/master/api/cassandra/cluster.html#cassandra.cluster.Cluster.compression>`_
-* `Scylla Java Driver <https://github.com/scylladb/java-driver/tree/3.7.1-scylla/manual/compression>`_
+* `ScyllaDB Python Driver <https://python-driver.docs.scylladb.com/master/api/cassandra/cluster.html#cassandra.cluster.Cluster.compression>`_
+* `ScyllaDB Java Driver <https://github.com/scylladb/java-driver/tree/3.7.1-scylla/manual/compression>`_
* `Go Driver <https://godoc.org/github.com/gocql/gocql#Compressor>`_

Refer to the :doc:`Drivers Page </using-scylla/drivers/index>` for more drivers.
@@ -133,26 +133,26 @@ internode_compression controls whether traffic between nodes is compressed.
Configuring TLS/SSL in scylla.yaml
----------------------------------

-Scylla versions 1.1 and greater support encryption between nodes and between client and node. See the Scylla :doc:`Scylla TLS/SSL guide: </operating-scylla/security/index>` for configuration settings.
+ScyllaDB versions 1.1 and greater support encryption between nodes and between client and node. See the :doc:`ScyllaDB TLS/SSL guide </operating-scylla/security/index>` for configuration settings.

.. _cqlsh-networking:

Networking
----------

-The ScyllaDB ports are detailed in the table below. For ScyllaDB Manager ports, see the `Scylla Manager Documentation <https://manager.docs.scylladb.com/>`_.
+The ScyllaDB ports are detailed in the table below. For ScyllaDB Manager ports, see the `ScyllaDB Manager Documentation <https://manager.docs.scylladb.com/>`_.

.. image:: /operating-scylla/security/Scylla-Ports2.png

.. include:: /operating-scylla/_common/networking-ports.rst

All ports above need to be open to external clients (CQL), external admin systems (JMX), and other nodes (RPC). REST API port can be kept closed for incoming external connections.

-The JMX service, :code:`scylla-jmx`, runs on port 7199. It is required in order to manage Scylla using :code:`nodetool` and other Apache Cassandra-compatible utilities. The :code:`scylla-jmx` process must be able to connect to port 10000 on localhost. The JMX service listens for incoming JMX connections on all network interfaces on the system.
+The JMX service, :code:`scylla-jmx`, runs on port 7199. It is required in order to manage ScyllaDB using :code:`nodetool` and other Apache Cassandra-compatible utilities. The :code:`scylla-jmx` process must be able to connect to port 10000 on localhost. The JMX service listens for incoming JMX connections on all network interfaces on the system.
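For example, nodetool can be pointed at the JMX service explicitly; the host and port below are the defaults, shown here only as a sketch:

.. code-block:: shell

   # Query cluster status through the local JMX service on its default port.
   nodetool -h 127.0.0.1 -p 7199 status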

Advanced networking
-------------------
-It is possible that a client, or another node, may need to use a different IP address to connect to a Scylla node from the address that the node is listening on. This is the case when a node is behind port forwarding. Scylla allows for setting alternate IP addresses.
+It is possible that a client, or another node, may need to use a different IP address to connect to a ScyllaDB node from the address that the node is listening on. This is the case when a node is behind port forwarding. ScyllaDB allows for setting alternate IP addresses.

Do not set any IP address to :code:`0.0.0.0`.

@@ -164,13 +164,13 @@ Do not set any IP address to :code:`0.0.0.0`.
- Description
- Default
* - listen_address (required)
- - Address Scylla listens for connections from other nodes. See storage_port and ssl_storage_ports.
+ - Address ScyllaDB listens for connections from other nodes. See storage_port and ssl_storage_ports.
- No default
* - rpc_address (required)
- - Address on which Scylla is going to expect CQL clients connections. See rpc_port, native_transport_port and native_transport_port_ssl in the :ref:`Networking <cqlsh-networking>` parameters.
+ - Address on which ScyllaDB is going to expect CQL client connections. See rpc_port, native_transport_port and native_transport_port_ssl in the :ref:`Networking <cqlsh-networking>` parameters.
- No default
* - broadcast_address
- - Address that is broadcasted to tell other Scylla nodes to connect to. Related to listen_address above.
+ - Address that is broadcasted to tell other ScyllaDB nodes to connect to. Related to listen_address above.
- listen_address
* - broadcast_rpc_address
- Address that is broadcasted to tell the clients to connect to. Related to rpc_address.
@@ -187,36 +187,36 @@ If clients can connect directly to :code:`rpc_address`, then :code:`broadcast_rp

Core dumps
----------
-On RHEL and CentOS, the `Automatic Bug Reporting Tool <https://abrt.readthedocs.io/en/latest/>`_ conflicts with Scylla coredump configuration. Remove it before installing Scylla: :code:`sudo yum remove -y abrt`
+On RHEL and CentOS, the `Automatic Bug Reporting Tool <https://abrt.readthedocs.io/en/latest/>`_ conflicts with ScyllaDB coredump configuration. Remove it before installing ScyllaDB: :code:`sudo yum remove -y abrt`

-Scylla places any core dumps in :code:`var/lib/scylla/coredump`. They are not visible with the :code:`coredumpctl` command. See the :doc:`System Configuration Guide </getting-started/system-configuration/>` for details on core dump configuration scripts. Check with Scylla support before sharing any core dump, as they may contain sensitive data.
+ScyllaDB places any core dumps in :code:`/var/lib/scylla/coredump`. They are not visible with the :code:`coredumpctl` command. See the :doc:`System Configuration Guide </getting-started/system-configuration/>` for details on core dump configuration scripts. Check with ScyllaDB support before sharing any core dump, as they may contain sensitive data.
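To check whether a node has produced any core dumps, simply list that directory (a sketch):

.. code-block:: shell

   # Core dumps, if any, are written here rather than to coredumpctl's store.
   ls -lh /var/lib/scylla/coredump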

Schedule fstrim
===============

-Scylla sets up daily fstrim on the filesystem(s),
-containing your Scylla commitlog and data directory. This utility will
+ScyllaDB sets up daily fstrim on the filesystem(s),
+containing your ScyllaDB commitlog and data directory. This utility will
discard, or trim, any blocks no longer in use by the filesystem.

Experimental Features
=====================

-Scylla Open Source uses experimental flags to expose non-production-ready features safely. These features are not stable enough to be used in production, and their API will likely change, breaking backward or forward compatibility.
+ScyllaDB Open Source uses experimental flags to expose non-production-ready features safely. These features are not stable enough to be used in production, and their API will likely change, breaking backward or forward compatibility.

-In recent Scylla versions, these features are controlled by the ``experimental_features`` list in scylla.yaml, allowing one to choose which experimental to enable.
-For example, some of the experimental features in Scylla Open Source 4.5 are: ``udf``, ``alternator-streams`` and ``raft``.
+In recent ScyllaDB versions, these features are controlled by the ``experimental_features`` list in scylla.yaml, allowing one to choose which experimental features to enable.
+For example, some of the experimental features in ScyllaDB Open Source 4.5 are: ``udf``, ``alternator-streams`` and ``raft``.
Use ``scylla --help`` to get the list of experimental features.
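A minimal sketch of enabling selected experimental features in ``scylla.yaml``; the feature names are only examples, so check ``scylla --help`` for the list supported by your version:

.. code-block:: shell

   # Append an experimental_features list to scylla.yaml, then restart the node.
   cat <<'EOF' | sudo tee -a /etc/scylla/scylla.yaml
   experimental_features:
       - udf
       - alternator-streams
   EOF
   sudo systemctl restart scylla-server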

-Scylla Enterprise and Scylla Cloud do not officially support experimental Features.
+ScyllaDB Enterprise and ScyllaDB Cloud do not officially support experimental features.

Monitoring
==========
-Scylla exposes interfaces for online monitoring, as described below.
+ScyllaDB exposes interfaces for online monitoring, as described below.

Monitoring Interfaces
---------------------

-`Scylla Monitoring Interfaces <https://monitoring.docs.scylladb.com/stable/reference/monitoring_apis.html>`_
+`ScyllaDB Monitoring Interfaces <https://monitoring.docs.scylladb.com/stable/reference/monitoring_apis.html>`_

Monitoring Stack
----------------
@@ -225,7 +225,7 @@ Monitoring Stack

JMX
---
-Scylla JMX is compatible with Apache Cassandra, exposing the relevant subset of MBeans.
+ScyllaDB JMX is compatible with Apache Cassandra, exposing the relevant subset of MBeans.

.. REST

@@ -234,7 +234,7 @@ Scylla JMX is compatible with Apache Cassandra, exposing the relevant subset of
Un-contents
-----------

-Scylla is designed for high performance before tuning, for fewer layers that interact in unpredictable ways, and to use better algorithms that do not require manual tuning. The following items are found in the manuals for other data stores but do not need to appear here.
+ScyllaDB is designed for high performance before tuning, for fewer layers that interact in unpredictable ways, and to use better algorithms that do not require manual tuning. The following items are found in the manuals for other data stores but do not need to appear here.

Configuration un-contents
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -258,9 +258,9 @@ Testing compaction and compression
* Purging gossip state on a node


-Help with Scylla
-================
-Contact `Support <https://www.scylladb.com/product/support/>`_, or visit the Scylla `Community <https://www.scylladb.com/open-source-community/>`_ page for peer support.
+Help with ScyllaDB
+==================
+Contact `Support <https://www.scylladb.com/product/support/>`_, or visit the ScyllaDB `Community <https://www.scylladb.com/open-source-community/>`_ page for peer support.

.. include:: /rst_include/apache-copyrights-index.rst

diff --git a/docs/operating-scylla/benchmarking-scylla.rst b/docs/operating-scylla/benchmarking-scylla.rst
--- a/docs/operating-scylla/benchmarking-scylla.rst
+++ b/docs/operating-scylla/benchmarking-scylla.rst
@@ -3,7 +3,7 @@ Benchmarking ScyllaDB
======================


-For more information on the best way to benchmark Scylla, check out our blog:
+For more information on the best way to benchmark ScyllaDB, check out our blog:

* `Best Practices for Benchmarking ScyllaDB <https://www.scylladb.com/2021/03/04/best-practices-for-benchmarking-scylla/>`_
* `How to Test and Benchmark Database Clusters <https://www.scylladb.com/2020/11/04/how-to-test-and-benchmark-database-clusters/>`_
diff --git a/docs/operating-scylla/diagnostics.rst b/docs/operating-scylla/diagnostics.rst
--- a/docs/operating-scylla/diagnostics.rst
+++ b/docs/operating-scylla/diagnostics.rst
@@ -94,8 +94,8 @@ ScyllaDB has various other tools, mainly to work with sstables.
If you are diagnosing a problem that is related to sstables misbehaving or being corrupt, you may find these useful:

* `sstabledump </operating-scylla/admin-tools/sstabledump/>`_
-* `Scylla SStable </operating-scylla/admin-tools/scylla-sstable/>`_
-* `Scylla Types </operating-scylla/admin-tools/scylla-types/>`_
+* `ScyllaDB SStable </operating-scylla/admin-tools/scylla-sstable/>`_
+* `ScyllaDB Types </operating-scylla/admin-tools/scylla-types/>`_

GDB
---
diff --git a/docs/operating-scylla/index.rst b/docs/operating-scylla/index.rst
--- a/docs/operating-scylla/index.rst
+++ b/docs/operating-scylla/index.rst
@@ -18,33 +18,33 @@ ScyllaDB for Administrators
diagnostics

.. panel-box::
- :title: Scylla Administration
+ :title: ScyllaDB Administration
:id: "getting-started"
:class: my-panel

- * :doc:`Scylla Administrator Guide </operating-scylla/admin/>` - Guide for Scylla Administration
- * :doc:`Upgrade Scylla </upgrade/index>` - Upgrade Procedures for all Scylla Products and Versions
- * :doc:`System Configuration </operating-scylla/system-configuration/index>` - Information on the Scylla configuration files
- * :doc:`Procedures </operating-scylla/procedures/index>` - Procedures to create, out-scale, down-scale, and backup Scylla clusters
- * :doc:`Scylla Security </operating-scylla/security/index>` - Procedures to secure, authenticate, and encrypt Scylla users and data
+ * :doc:`ScyllaDB Administrator Guide </operating-scylla/admin/>` - Guide for ScyllaDB Administration
+ * :doc:`Upgrade ScyllaDB </upgrade/index>` - Upgrade Procedures for all ScyllaDB Products and Versions
+ * :doc:`System Configuration </operating-scylla/system-configuration/index>` - Information on the ScyllaDB configuration files
+ * :doc:`Procedures </operating-scylla/procedures/index>` - Procedures to create, scale out, scale down, and back up ScyllaDB clusters
+ * :doc:`ScyllaDB Security </operating-scylla/security/index>` - Procedures to secure, authenticate, and encrypt ScyllaDB users and data

.. panel-box::
- :title: Scylla Tools
+ :title: ScyllaDB Tools
:id: "getting-started"
:class: my-panel

- * :doc:`Scylla Tools </operating-scylla/admin-tools/index>` - Tools for Administrating and integrating with Scylla
+ * :doc:`ScyllaDB Tools </operating-scylla/admin-tools/index>` - Tools for administering and integrating with ScyllaDB
* `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ - Tool for cluster monitoring and alerting
- * `ScyllaDB Operator <https://operator.docs.scylladb.com>`_ - Tool to run Scylla on Kubernetes
+ * `ScyllaDB Operator <https://operator.docs.scylladb.com>`_ - Tool to run ScyllaDB on Kubernetes
* `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ - Tool for cluster administration and automation
- * :doc:`Scylla Logs </getting-started/logging/>`
+ * :doc:`ScyllaDB Logs </getting-started/logging/>`

.. panel-box::
:title: Benchmark Testing
:id: "getting-started"
:class: my-panel

- * :doc:`Benchmark Testing for Scylla </operating-scylla/benchmarking-scylla/>` - Information on benchmark tests you can conduct on Scylla
+ * :doc:`Benchmark Testing for ScyllaDB </operating-scylla/benchmarking-scylla/>` - Information on benchmark tests you can conduct on ScyllaDB

.. panel-box::
:title: Diagnostics
@@ -54,11 +54,11 @@ ScyllaDB for Administrators
* :doc:`Diagnostics tools </operating-scylla/diagnostics/>` - What tools are available for diagnosing problems with ScyllaDB

.. panel-box::
- :title: Learn More About Scylla
+ :title: Learn More About ScyllaDB
:id: "getting-started"
:class: my-panel

- * :doc:`Scylla Features </using-scylla/features>` - Feature list for Scylla Open Source and Scylla Enterprise
+ * :doc:`ScyllaDB Features </using-scylla/features>` - Feature list for ScyllaDB Open Source and ScyllaDB Enterprise



diff --git a/docs/operating-scylla/nodetool-commands/compact.rst b/docs/operating-scylla/nodetool-commands/compact.rst
--- a/docs/operating-scylla/nodetool-commands/compact.rst
+++ b/docs/operating-scylla/nodetool-commands/compact.rst
@@ -8,7 +8,7 @@ By default, major compaction runs on all the ``keyspaces`` and tables.
Major compactions will take all the SSTables for a column family and merge them into a **single SSTable per shard**.
If a keyspace is provided, the compaction will run on all of the tables within that keyspace. If one or more tables are provided as command-line arguments, the compaction will run only on those tables.

-.. caution:: It is always best to allow Scylla to automatically run minor compactions using a :doc:`compaction strategy </kb/compaction>`. Using Nodetool to run compaction can quickly exhaust all resources, increase operational costs, and take up valuable disk space. For this reason, major compactions should be avoided and are not recommended for any production system.
+.. caution:: It is always best to allow ScyllaDB to automatically run minor compactions using a :doc:`compaction strategy </kb/compaction>`. Using Nodetool to run compaction can quickly exhaust all resources, increase operational costs, and take up valuable disk space. For this reason, major compactions should be avoided and are not recommended for any production system.


Syntax
diff --git a/docs/operating-scylla/nodetool-commands/drain.rst b/docs/operating-scylla/nodetool-commands/drain.rst
--- a/docs/operating-scylla/nodetool-commands/drain.rst
+++ b/docs/operating-scylla/nodetool-commands/drain.rst
@@ -1,6 +1,6 @@
Nodetool drain
==============
-**drain** - Flushes all memtables from a node to the SSTables that are on the disk. Scylla stops listening for connections from the client and other nodes. You need to restart Scylla after running this command. This command is usually executed before upgrading a node to a new version or before any maintenance action is performed. When you want to simply flush memtables to disk, use the :doc:`nodetool flush </operating-scylla/nodetool-commands/flush/>` command.
+**drain** - Flushes all memtables from a node to the SSTables that are on the disk. ScyllaDB stops listening for connections from the client and other nodes. You need to restart ScyllaDB after running this command. This command is usually executed before upgrading a node to a new version or before any maintenance action is performed. When you want to simply flush memtables to disk, use the :doc:`nodetool flush </operating-scylla/nodetool-commands/flush/>` command.

For example:
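A minimal sketch of a typical drain-and-restart sequence, assuming a systemd-managed node:

.. code-block:: shell

   # Flush memtables and stop accepting connections, then restart the service.
   nodetool drain
   sudo systemctl restart scylla-server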

diff --git a/docs/operating-scylla/nodetool-commands/getendpoints.rst b/docs/operating-scylla/nodetool-commands/getendpoints.rst
--- a/docs/operating-scylla/nodetool-commands/getendpoints.rst
+++ b/docs/operating-scylla/nodetool-commands/getendpoints.rst
@@ -30,7 +30,7 @@ For example:
nodetool getendpoints mykeyspace superheroes "peter:parker"


-Scylla does not support *getendpoints* for a partition key with a frozen UDT.
+ScyllaDB does not support *getendpoints* for a partition key with a frozen UDT.


.. include:: nodetool-index.rst
diff --git a/docs/operating-scylla/nodetool-commands/gettraceprobability.rst b/docs/operating-scylla/nodetool-commands/gettraceprobability.rst
--- a/docs/operating-scylla/nodetool-commands/gettraceprobability.rst
+++ b/docs/operating-scylla/nodetool-commands/gettraceprobability.rst
@@ -24,5 +24,5 @@ Additional Information
----------------------

* :doc:`settraceprobability </operating-scylla/nodetool-commands/settraceprobability/>` - Nodetool Reference
-* `CQL tracing in Scylla blog <https://www.scylladb.com/2016/08/04/cql-tracing/>`_
+* `CQL tracing in ScyllaDB blog <https://www.scylladb.com/2016/08/04/cql-tracing/>`_

diff --git a/docs/operating-scylla/nodetool-commands/info.rst b/docs/operating-scylla/nodetool-commands/info.rst
--- a/docs/operating-scylla/nodetool-commands/info.rst
+++ b/docs/operating-scylla/nodetool-commands/info.rst
@@ -83,7 +83,7 @@ Example output:
| | |
| | |
+-----------+------------------------------+
-| Heap |Not applicable with Scylla |
+| Heap |Not applicable with ScyllaDB |
| Memory | |
| (MB) | |
| | |
@@ -116,20 +116,20 @@ Example output:
| | |
| | |
+-----------+------------------------------+
-| Exceptions|Not applicable with Scylla |
+| Exceptions|Not applicable with ScyllaDB |
| | |
| | |
| | |
+-----------+------------------------------+
-| Key |Not applicable with Scylla |
+| Key |Not applicable with ScyllaDB |
| Cache | |
| | |
| | |
+-----------+------------------------------+
| Row |Row Cache usage |
| Cache | |
+-----------+------------------------------+
-| Counter |Not applicable with Scylla |
+| Counter |Not applicable with ScyllaDB |
| Cache | |
| | |
| | |
diff --git a/docs/operating-scylla/nodetool-commands/rebuild.rst b/docs/operating-scylla/nodetool-commands/rebuild.rst
--- a/docs/operating-scylla/nodetool-commands/rebuild.rst
+++ b/docs/operating-scylla/nodetool-commands/rebuild.rst
@@ -2,12 +2,12 @@ Nodetool rebuild
================

**rebuild** ``[<src-dc-name>]`` - This command rebuilds a node's data by streaming data from other nodes in the cluster (similarly to bootstrap).
-Rebuild operates on multiple nodes in a Scylla cluster. It streams data from a single source replica when rebuilding a token range. When executing the command, Scylla first figures out which ranges the local node (the one we want to rebuild) is responsible for. Then which node in the cluster contains the same ranges. Finally, Scylla streams the data to the local node.
+Rebuild operates on multiple nodes in a ScyllaDB cluster. It streams data from a single source replica when rebuilding a token range. When executing the command, ScyllaDB first figures out which ranges the local node (the one we want to rebuild) is responsible for, and then which node in the cluster contains the same ranges. Finally, ScyllaDB streams the data to the local node.

-When :doc:`adding a new data-center into an existing Scylla cluster </operating-scylla/procedures/cluster-management/add-dc-to-existing-dc/>` use the rebuild command.
+When :doc:`adding a new data-center into an existing ScyllaDB cluster </operating-scylla/procedures/cluster-management/add-dc-to-existing-dc/>`, use the rebuild command.


-.. note:: The Scylla rebuild process continues to run in the background, even if the nodetool command is killed or interrupted.
+.. note:: The ScyllaDB rebuild process continues to run in the background, even if the nodetool command is killed or interrupted.


For Example:
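A minimal sketch (the source data center name is illustrative):

.. code-block:: shell

   # Rebuild this node's data, streaming from replicas in the existing DC.
   nodetool rebuild existing-dc-name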
diff --git a/docs/operating-scylla/nodetool-commands/refresh.rst b/docs/operating-scylla/nodetool-commands/refresh.rst
--- a/docs/operating-scylla/nodetool-commands/refresh.rst
+++ b/docs/operating-scylla/nodetool-commands/refresh.rst
@@ -8,7 +8,7 @@ Add the files to the upload directory, by default it is located under ``/var/lib
:doc:`Materialized Views (MV)</cql/mv/>` and :doc:`Secondary Indexes (SI)</cql/secondary-indexes/>` of the upload table, and if they exist, they are automatically updated. Uploading MV or SI SSTables is not required and will fail.


-.. note:: Scylla node will ignore the partitions in the sstables which are not assigned to this node. For example, if sstable are copied from a different node.
+.. note:: A ScyllaDB node will ignore the partitions in the sstables that are not assigned to this node, for example, when the sstables were copied from a different node.


Execute the ``nodetool refresh`` command
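A minimal sketch of the overall flow; the keyspace, table, and UUID below are illustrative:

.. code-block:: shell

   # Copy the sstable files into the table's upload directory, then load them.
   sudo cp /path/to/sstables/* /var/lib/scylla/data/mykeyspace/mytable-<UUID>/upload/
   nodetool refresh mykeyspace mytable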
diff --git a/docs/operating-scylla/nodetool-commands/removenode.rst b/docs/operating-scylla/nodetool-commands/removenode.rst
--- a/docs/operating-scylla/nodetool-commands/removenode.rst
+++ b/docs/operating-scylla/nodetool-commands/removenode.rst
@@ -6,7 +6,7 @@ Nodetool removenode
Before using the command, make sure the node is permanently down and cannot be recovered.

If the node is up and reachable by other nodes, use ``nodetool decommission``.
- See :doc:`Remove a Node from a Scylla Cluster </operating-scylla/procedures/cluster-management/remove-node>` for more information.
+ See :doc:`Remove a Node from a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/remove-node>` for more information.


This command allows you to remove a node from the cluster when the status of the node is Down Normal (DN) and all attempts to restore the node have failed.
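A minimal sketch; take the Host ID from the ``nodetool status`` output line of the dead node:

.. code-block:: shell

   # Find the Host ID of the node marked DN, then remove it from the ring.
   nodetool status
   nodetool removenode <host-id>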
diff --git a/docs/operating-scylla/nodetool-commands/repair.rst b/docs/operating-scylla/nodetool-commands/repair.rst
--- a/docs/operating-scylla/nodetool-commands/repair.rst
+++ b/docs/operating-scylla/nodetool-commands/repair.rst
@@ -16,7 +16,7 @@ To repair **all** of the data in the cluster, you need to run a repair on **all*
It is strongly recommended to **not** do **any** maintenance operations (add/remove/decommission/replace/rebuild) **or** schema changes (CREATE/DROP/TRUNCATE/ALTER CQL commands) while repairs are running. Repairs running during any of these operations are likely to result in an error.


-Scylla nodetool repair command supports the following options:
+The ScyllaDB nodetool repair command supports the following options:


- ``-dc`` ``--in-dc`` syncs the **repair master** data subset between all nodes in one Data Center (DC).
diff --git a/docs/operating-scylla/nodetool-commands/ring.rst b/docs/operating-scylla/nodetool-commands/ring.rst
--- a/docs/operating-scylla/nodetool-commands/ring.rst
+++ b/docs/operating-scylla/nodetool-commands/ring.rst
@@ -2,7 +2,7 @@ Nodetool ring
=============
**ring** ``[<keyspace>] [<table>]`` - The nodetool ring command displays the token
ring information. The token ring is responsible for managing the
-partitioning of data within the Scylla cluster. This command is
+partitioning of data within the ScyllaDB cluster. This command is
critical if a cluster is facing data consistency issues.

By default, ``ring`` command shows all keyspaces.
diff --git a/docs/operating-scylla/nodetool-commands/setlogginglevel.rst b/docs/operating-scylla/nodetool-commands/setlogginglevel.rst
--- a/docs/operating-scylla/nodetool-commands/setlogginglevel.rst
+++ b/docs/operating-scylla/nodetool-commands/setlogginglevel.rst
@@ -3,7 +3,7 @@ Nodetool setlogginglevel

**setlogginglevel** sets the level log threshold for a given component or class during runtime. If this command is called with no parameters, the log level is reset to the initial configuration.

-.. note:: Using trace or debug logging levels will create very large log files where the readers may not find what they are looking for. It is best to use these levels for a very short period of time or with the help of Scylla Support.
+.. note:: Using the trace or debug logging levels will create very large log files in which readers may not find what they are looking for. It is best to use these levels only for a very short period of time or with the help of ScyllaDB Support.

.. code-block:: shell

diff --git a/docs/operating-scylla/nodetool-commands/settraceprobability.rst b/docs/operating-scylla/nodetool-commands/settraceprobability.rst
--- a/docs/operating-scylla/nodetool-commands/settraceprobability.rst
+++ b/docs/operating-scylla/nodetool-commands/settraceprobability.rst
@@ -7,9 +7,9 @@ Anything in between is a percentage of the time, converted into a decimal. For e
This command is useful to determine the cause of intermittent query performance problems by identifying which queries are responsible.
It can trace some or all the queries sent to the cluster, setting the probability to 1.0 will trace everything, set to a lower number will reduce the traced queries.
Use caution when setting the ``settraceprobability`` high, it can affect active systems, as system-wide tracing will have a performance impact.
-Trace information is stored under ``system_traces`` keyspace for more information you can read our `CQL tracing in Scylla`_ blog
+Trace information is stored under the ``system_traces`` keyspace. For more information, you can read our `CQL tracing in ScyllaDB`_ blog.

-.. _`CQL tracing in Scylla`: https://www.scylladb.com/2016/08/04/cql-tracing/
+.. _`CQL tracing in ScyllaDB`: https://www.scylladb.com/2016/08/04/cql-tracing/

For example, to set the probability to 10%:
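A sketch of that invocation:

.. code-block:: shell

   # Trace roughly 10% of requests cluster-wide.
   nodetool settraceprobability 0.1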

diff --git a/docs/operating-scylla/nodetool-commands/snapshot.rst b/docs/operating-scylla/nodetool-commands/snapshot.rst
--- a/docs/operating-scylla/nodetool-commands/snapshot.rst
+++ b/docs/operating-scylla/nodetool-commands/snapshot.rst
@@ -99,7 +99,7 @@ Each of the snapshots is a **hardlink** to the SSTable directory.
la-1-big-Digest.sha1
la-1-big-Filter.db
la-1-big-Index.db
la-1-big-Scylla.db
la-1-big-Statistics.db
la-1-big-Summary.db
la-1-big-TOC.txt
@@ -109,6 +109,6 @@ Additional Resources
^^^^^^^^^^^^^^^^^^^^

* :doc:`Backup your data </operating-scylla/procedures/backup-restore/backup>`
-* :doc:`Scylla Snapshots </kb/snapshots>`
+* :doc:`ScyllaDB Snapshots </kb/snapshots>`

.. include:: /rst_include/apache-copyrights.rst
diff --git a/docs/operating-scylla/nodetool-commands/status.rst b/docs/operating-scylla/nodetool-commands/status.rst
--- a/docs/operating-scylla/nodetool-commands/status.rst
+++ b/docs/operating-scylla/nodetool-commands/status.rst
@@ -54,8 +54,8 @@ Example output:
|Address |The IP address of the node. |
| | |
+----------+---------------------------------------+
-|Load |The size on disk the Scylla data takes |
-| |up (updates every 60 seconds). |
+|Load |The size on disk the ScyllaDB data |
+| | takes up (updates every 60 seconds). |
| | |
| | |
| | |
diff --git a/docs/operating-scylla/nodetool-commands/toppartitions.rst b/docs/operating-scylla/nodetool-commands/toppartitions.rst
--- a/docs/operating-scylla/nodetool-commands/toppartitions.rst
+++ b/docs/operating-scylla/nodetool-commands/toppartitions.rst
@@ -19,7 +19,7 @@ table The table name
duration The duration in milliseconds
========= ============================

-Additional parameters from Scylla 4.6
+Additional parameters from ScyllaDB 4.6

========== ===================================
Parameter Description
@@ -37,7 +37,7 @@ For example:

nodetool toppartitions nba team_roster 5000

-For Example (Starting from Scylla 4.6):
+For Example (Starting from ScyllaDB 4.6):

* listing the top partitions from *all* tables in *all* keyspaces ``nodetool toppartitions``
* listing the top partitions for the last 1000 ms ``nodetool toppartitions -d 1000``
@@ -47,7 +47,7 @@ For Example (Starting from Scylla 4.6):

.. note::

- In Scylla 4.6, **duration** parameter requires a *-d* prefix
+ In ScyllaDB 4.6, **duration** parameter requires a *-d* prefix


Example output:
@@ -88,7 +88,7 @@ Output
============= =============================================================================================
Parameter Description
============= =============================================================================================
-Partition The Partition Key, prefixed by the Keyspace and table (ks:cf) for Scylla 4.6 and later
+Partition The Partition Key, prefixed by the Keyspace and table (ks:cf) for ScyllaDB 4.6 and later
------------- ---------------------------------------------------------------------------------------------
Count The number of operations of the specified type that occurred during the specified time period
------------- ---------------------------------------------------------------------------------------------
diff --git a/docs/operating-scylla/nodetool-commands/upgradesstables.rst b/docs/operating-scylla/nodetool-commands/upgradesstables.rst
--- a/docs/operating-scylla/nodetool-commands/upgradesstables.rst
+++ b/docs/operating-scylla/nodetool-commands/upgradesstables.rst
@@ -1,9 +1,9 @@
Nodetool upgradesstables
========================

-**upgradesstables** - Upgrades each table that is not running the latest Scylla version by rewriting the SSTables.
+**upgradesstables** - Upgrades each table that is not running the latest ScyllaDB version by rewriting the SSTables.

-Note that this is *not* required when enabling mc format or upgrading to a newer Scylla version. In these cases, Scylla writes a new SSTable, either in MemTable flush or compaction, while keeping the old tables in the old format.
+Note that this is *not* required when enabling mc format or upgrading to a newer ScyllaDB version. In these cases, ScyllaDB writes a new SSTable, either in MemTable flush or compaction, while keeping the old tables in the old format.

You can specify to run this action on a specific table or keyspace or on all SSTables. Use this command when changing compression options, or encrypting/decrypting a table for encryption at rest and you want to rewrite SSTable to the new format, instead of waiting for compaction to do it for you at a later time.
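A minimal sketch of rewriting a single table (the keyspace and table names are illustrative):

.. code-block:: shell

   # Rewrite the sstables of one table into the current format.
   nodetool upgradesstables mykeyspace mytable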

diff --git a/docs/operating-scylla/nodetool-commands/version.rst b/docs/operating-scylla/nodetool-commands/version.rst
--- a/docs/operating-scylla/nodetool-commands/version.rst
+++ b/docs/operating-scylla/nodetool-commands/version.rst
@@ -1,7 +1,7 @@
Nodetool version
================
-**version** - Displays the Apache Cassandra version which your version of Scylla is most compatible with, not your current Scylla version.
-To display the Scylla version, refer to :ref:`Check your current version of Scylla <check-your-current-version-of-scylla>`.
+**version** - Displays the Apache Cassandra version which your version of ScyllaDB is most compatible with, not your current ScyllaDB version.
+To display the ScyllaDB version, refer to :ref:`Check your current version of ScyllaDB <check-your-current-version-of-scylla>`.
To display additional compatibility metrics, such as CQL spec version, refer to :ref:`SHOW VERSION <cqlsh-show-version>`.


diff --git a/docs/operating-scylla/nodetool.rst b/docs/operating-scylla/nodetool.rst
--- a/docs/operating-scylla/nodetool.rst
+++ b/docs/operating-scylla/nodetool.rst
@@ -58,7 +58,7 @@ Nodetool
nodetool-commands/viewbuildstatus
nodetool-commands/version

-The ``nodetool`` utility provides a simple command-line interface to the following exposed operations and attributes. Scylla’s nodetool is a fork of `the Apache Cassandra nodetool <https://cassandra.apache.org/doc/latest/tools/nodetool/nodetool.html>`_ with the same syntax and a subset of the operations.
+The ``nodetool`` utility provides a simple command-line interface to the following exposed operations and attributes. ScyllaDB’s nodetool is a fork of `the Apache Cassandra nodetool <https://cassandra.apache.org/doc/latest/tools/nodetool/nodetool.html>`_ with the same syntax and a subset of the operations.

.. _nodetool-generic-options:

@@ -123,7 +123,7 @@ Operations that are not listed below are currently not available.
* :doc:`resetlocalschema </operating-scylla/nodetool-commands/resetlocalschema/>` - Reset the node's local schema.
* :doc:`ring <nodetool-commands/ring/>` - The nodetool ring command display the token ring information.
* :doc:`scrub </operating-scylla/nodetool-commands/scrub>` :code:`[-m mode] [--no-snapshot] <keyspace> [<table>...]` - Scrub the SSTable files in the specified keyspace or table(s)
-* :doc:`setlogginglevel</operating-scylla/nodetool-commands/setlogginglevel>` - sets the logging level threshold for Scylla classes
+* :doc:`setlogginglevel</operating-scylla/nodetool-commands/setlogginglevel>` - sets the logging level threshold for ScyllaDB classes
* :doc:`settraceprobability </operating-scylla/nodetool-commands/settraceprobability/>` ``<value>`` - Sets the probability for tracing a request to the given trace probability value.
* :doc:`snapshot </operating-scylla/nodetool-commands/snapshot>` :code:`[-t tag] [-cf column_family] <keyspace>` - Take a snapshot of specified keyspaces or a snapshot of the specified table.
* :doc:`sstableinfo </operating-scylla/nodetool-commands/sstableinfo>` - Get information about sstables per keyspace/table.
@@ -135,7 +135,7 @@ Operations that are not listed below are currently not available.
* **tablehistograms** see :doc:`cfhistograms <nodetool-commands/cfhistograms/>`
* :doc:`tablestats </operating-scylla/nodetool-commands/tablestats/>` - Provides in-depth diagnostics regarding tables.
* :doc:`toppartitions </operating-scylla/nodetool-commands/toppartitions/>` - Samples cluster writes and reads and reports the most active partitions in a specified table and time frame.
-* :doc:`upgradesstables </operating-scylla/nodetool-commands/upgradesstables>` - Upgrades each table that is not running the latest Scylla version, by rewriting SSTables.
+* :doc:`upgradesstables </operating-scylla/nodetool-commands/upgradesstables>` - Upgrades each table that is not running the latest ScyllaDB version, by rewriting SSTables.
* :doc:`viewbuildstatus </operating-scylla/nodetool-commands/viewbuildstatus/>` - Shows the progress of a materialized view build.
* :doc:`version </operating-scylla/nodetool-commands/version>` - Print the DB version.

diff --git a/docs/operating-scylla/procedures/backup-restore/backup.rst b/docs/operating-scylla/procedures/backup-restore/backup.rst
--- a/docs/operating-scylla/procedures/backup-restore/backup.rst
+++ b/docs/operating-scylla/procedures/backup-restore/backup.rst
@@ -49,7 +49,7 @@ For example:

| ``$ nodetool snapshot mykeyspace``

-| The snapshot is created under Scylla data directory ``/var/lib/scylla/data``
+| The snapshot is created under ScyllaDB data directory ``/var/lib/scylla/data``
| It will have the following structure:
| ``keyspace_name/table_name-UUID/snapshots/snapshot_name``

@@ -68,11 +68,11 @@ Incremental Backup

* A snapshot
* All incremental backups and commit logs from the time of the snapshot.
- * Make sure to delete unnecessary incremental backups. Scylla does not do this automatically.
+ * Make sure to delete unnecessary incremental backups. ScyllaDB does not do this automatically.

**Procedure**

-| 1. In the ``/etc/scylla/scylla.yaml`` file set the ``incremental backups`` parameters to ``true`` and restart the Scylla service. Snapshot are created under Scylla data directory ``/var/lib/scylla/data``
+| 1. In the ``/etc/scylla/scylla.yaml`` file, set the ``incremental_backups`` parameter to ``true`` and restart the ScyllaDB service (see the sketch below). Snapshots are created under the ScyllaDB data directory ``/var/lib/scylla/data``
| with the following structure:
| ``keyspace_name/table_name-UUID/backups/backups_name``
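A minimal sketch of step 1, assuming the default ``scylla.yaml`` location and a systemd-managed node:

.. code-block:: shell

   # Turn on incremental backups and restart so the setting takes effect.
   # If the key is commented out or absent, edit scylla.yaml by hand instead.
   sudo sed -i 's/^incremental_backups:.*/incremental_backups: true/' /etc/scylla/scylla.yaml
   sudo systemctl restart scylla-server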

@@ -83,6 +83,6 @@ Incremental Backup
Additional Resources
====================

-* :doc:`Scylla Snapshots </kb/snapshots>`
+* :doc:`ScyllaDB Snapshots </kb/snapshots>`


diff --git a/docs/operating-scylla/procedures/backup-restore/index.rst b/docs/operating-scylla/procedures/backup-restore/index.rst
--- a/docs/operating-scylla/procedures/backup-restore/index.rst
+++ b/docs/operating-scylla/procedures/backup-restore/index.rst
@@ -18,7 +18,7 @@ Backup and Restore Procedures
</div>
<div class="medium-9 columns">

-Procedures to backup and restore your Scylla data safely
+Procedures to back up and restore your ScyllaDB data safely

* :doc:`Backup your Data <backup>`

diff --git a/docs/operating-scylla/procedures/backup-restore/restore.rst b/docs/operating-scylla/procedures/backup-restore/restore.rst
--- a/docs/operating-scylla/procedures/backup-restore/restore.rst
+++ b/docs/operating-scylla/procedures/backup-restore/restore.rst
@@ -53,7 +53,7 @@ Repeat the following steps for each node in the cluster:

``sudo rm -rf /var/lib/scylla/commitlog/*``

-#. Delete all the files in the keyspace_name_table. Note that by default the snapshots are created under Scylla data directory ``/var/lib/scylla/data/keyspace_name/table_name-UUID/``.
+#. Delete all the files in the keyspace_name_table. Note that by default the snapshots are created under the ScyllaDB data directory ``/var/lib/scylla/data/keyspace_name/table_name-UUID/``.

Make sure NOT to delete the existing snapshots in the process.

@@ -68,7 +68,7 @@ Repeat the following steps for each node in the cluster:
-rw-r--r-- 4 scylla scylla 10 Mar 5 08:46 nba-team_players-ka-1-Digest.sha1
-rw-r--r-- 1 scylla scylla 24 Mar 5 09:19 nba-team_players-ka-1-Filter.db
-rw-r--r-- 1 scylla scylla 218 Mar 5 09:19 nba-team_players-ka-1-Index.db
-rw-r--r-- 1 scylla scylla 38 Mar 5 09:19 nba-team_players-ka-1-Scylla.db
-rw-r--r-- 1 scylla scylla 4446 Mar 5 09:19 nba-team_players-ka-1-Statistics.db
-rw-r--r-- 1 scylla scylla 89 Mar 5 09:19 nba-team_players-ka-1-Summary.db
-rw-r--r-- 4 scylla scylla 101 Mar 5 08:46 nba-team_players-ka-1-TOC.txt
diff --git a/docs/operating-scylla/procedures/cassandra-to-scylla-migration-process.rst b/docs/operating-scylla/procedures/cassandra-to-scylla-migration-process.rst
--- a/docs/operating-scylla/procedures/cassandra-to-scylla-migration-process.rst
+++ b/docs/operating-scylla/procedures/cassandra-to-scylla-migration-process.rst
@@ -1,22 +1,22 @@

-============================================
-Apache Cassandra to Scylla Migration Process
-============================================
+==============================================
+Apache Cassandra to ScyllaDB Migration Process
+==============================================

.. note:: The following instructions apply to migrating from Apache Cassandra and **not** from DataStax Enterprise.
- The DataStax Enterprise SSTable format is incompatible with Apache Cassandra or Scylla SSTable Loader and may not migrate properly.
+ The DataStax Enterprise SSTable format is incompatible with Apache Cassandra or ScyllaDB SSTable Loader and may not migrate properly.

-Migrating data from Apache Cassandra to an eventually consistent data store such as Scylla for a high volume, low latency application and verifying its consistency is a multi-step process.
+Migrating data from Apache Cassandra to an eventually consistent data store such as ScyllaDB for a high volume, low latency application and verifying its consistency is a multi-step process.

It involves the following high-level steps:


-1. Creating the same schema from Apache Cassandra in Scylla, though there can be some variation
+1. Creating the same schema from Apache Cassandra in ScyllaDB, though there can be some variation
2. Configuring your application/s to perform dual writes (still reading only from Apache Cassandra)
3. Taking a snapshot of all to-be-migrated data from Apache Cassandra
-4. Loading the SSTable files to Scylla using the Scylla sstableloader tool + Data validation
-5. Verification period: dual writes and reads, Scylla serves reads. Logging mismatches, until a minimal data mismatch threshold is reached
-6. Apache Cassandra End Of Life: Scylla only for reads and writes
+4. Loading the SSTable files to ScyllaDB using the ScyllaDB sstableloader tool + Data validation
+5. Verification period: dual writes and reads, ScyllaDB serves reads. Logging mismatches, until a minimal data mismatch threshold is reached
+6. Apache Cassandra End Of Life: ScyllaDB only for reads and writes

.. note::

@@ -29,7 +29,7 @@ It involves the following high-level steps:

.. image:: cassandra-to-scylla-2.png

-**Forklifting:** Migrate historical data from Apache Cassandra SSTables to Scylla
+**Forklifting:** Migrate historical data from Apache Cassandra SSTables to ScyllaDB

.. image:: cassandra-to-scylla-3.png

@@ -44,11 +44,11 @@ It involves the following high-level steps:
Procedure
---------

-1. Create manually / Migrate your schema (keyspaces, tables, and user-defined type, if used) on/to your Scylla cluster. When migrating from Apache Cassandra 3.x some schema updates are required (see `limitations and known issues section`_).
+1. Create manually / Migrate your schema (keyspaces, tables, and user-defined type, if used) on/to your ScyllaDB cluster. When migrating from Apache Cassandra 3.x some schema updates are required (see `limitations and known issues section`_).

- Export schema from Apache Cassandra: ``cqlsh [IP] "-e DESC SCHEMA" > orig_schema.cql``

- - Import schema to Scylla: ``cqlsh [IP] --file 'adjusted_schema.cql'``
+ - Import schema to ScyllaDB: ``cqlsh [IP] --file 'adjusted_schema.cql'``

.. _`limitations and known issues section`: #notes-limitations-and-known-issues

@@ -64,17 +64,17 @@ Procedure

.. note::

- Scylla Open Source 3.0 and later and Scylla Enterprise 2019.1 and later support :doc:`Materialized View(MV) </using-scylla/materialized-views>` and :doc:`Secondary Index(SI) </using-scylla/secondary-indexes>`.
+ ScyllaDB Open Source 3.0 and later and ScyllaDB Enterprise 2019.1 and later support :doc:`Materialized View(MV) </using-scylla/materialized-views>` and :doc:`Secondary Index(SI) </using-scylla/secondary-indexes>`.

When migrating data from Apache Cassandra with MV or SI, you can either:

* Create the MV and SI as part of the schema so that each new insert will be indexed.
* Upload all the data with sstableloader first, and only then :ref:`create the secondary indexes <create-index-statement>` and :ref:`MVs <create-materialized-view-statement>`.

- In either case, only use the sstableloader to load the base table SSTable. Do **not** load the index and view data - let Scylla index for you.
+ In either case, only use the sstableloader to load the base table SSTable. Do **not** load the index and view data - let ScyllaDB index for you.


-2. If you wish to perform the migration process without any downtime, please configure your application/s to perform dual writes to both data stores, Apache Cassandra and Scylla (see below code snippet for dual writes). Before doing that, and as general guidance, make sure to use the client-generated timestamp (writetime). If you do not, the data on Scylla and Apache Cassandra can be considered different, while it is the same.
+2. If you wish to perform the migration process without any downtime, please configure your application/s to perform dual writes to both data stores, Apache Cassandra and ScyllaDB (see the code snippet below for dual writes). Before doing that, and as general guidance, make sure to use the client-generated timestamp (writetime). If you do not, identical data on ScyllaDB and Apache Cassandra may be considered different.

Note: your application/s should continue reading and writing from Apache Cassandra until the entire migration process is completed, data integrity validated, and dual writes and reads verification period performed to your satisfaction.

@@ -124,9 +124,9 @@ See the full code example `here <https://github.com/scylladb/scylla-code-samples

Folder path post snapshot: ``/var/lib/cassandra/data/keyspace/table-[uuid]/snapshots/[epoch_timestamp]/``

-4. We strongly advise against running the sstableloader tool directly on the Scylla cluster, as it will consume resources from Scylla. Instead you should run the sstableloader from intermediate node/s. To do that, you need to install the ``scylla-tools-core`` package (it includes the sstableloader tool).
+4. We strongly advise against running the sstableloader tool directly on the ScyllaDB cluster, as it will consume resources from ScyllaDB. Instead you should run the sstableloader from intermediate node/s. To do that, you need to install the ``scylla-tools-core`` package (it includes the sstableloader tool).

- You need to make sure you have connectivity to both the Apache Cassandra and Scylla clusters. There are two ways to do that; both require having a file system in place (RAID is optional):
+ You need to make sure you have connectivity to both the Apache Cassandra and ScyllaDB clusters. There are two ways to do that; both require having a file system in place (RAID is optional):

- Option 1 (recommended): copy the SSTable files from the Apache Cassandra cluster to a local folder on the intermediate node.

@@ -142,7 +142,7 @@ See the full code example `here <https://github.com/scylladb/scylla-code-samples

2. Restart NFS server ``sudo systemctl restart nfs-kernel-server``

- 3. Create a new folder on one of the Scylla nodes and use it as a mount point to the Apache Cassandra node
+ 3. Create a new folder on one of the ScyllaDB nodes and use it as a mount point to the Apache Cassandra node

Example:
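A minimal sketch; the IP address and paths are illustrative:

.. code-block:: shell

   # Mount the exported Apache Cassandra data directory on the ScyllaDB node.
   sudo mkdir -p /mnt/cassandra-data
   sudo mount -t nfs 10.0.0.11:/var/lib/cassandra/data /mnt/cassandra-data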

@@ -152,29 +152,29 @@ See the full code example `here <https://github.com/scylladb/scylla-code-samples

5. If you cannot use intermediate node/s (see the previous step), then you have two options:

- - Option 1: Copy the sstable files to a local folder on one of your Scylla cluster nodes. Preferably on a disk or disk-array which is not part of the Scylla cluster RAID, yet still accessible for the sstableloader tool.
+ - Option 1: Copy the sstable files to a local folder on one of your ScyllaDB cluster nodes, preferably on a disk or disk array that is not part of the ScyllaDB cluster RAID, yet is still accessible to the sstableloader tool.

- Note: copying it to the Scylla RAID will require sufficient disk space (Apache Cassandra SSTable snapshots size x2 < 50% of Scylla node capacity) to contain both the copied SSTables files and the entire data migrated to Scylla (keyspace RF should also be taken into account).
+ Note: copying it to the ScyllaDB RAID will require sufficient disk space (Apache Cassandra SSTable snapshots size x2 < 50% of ScyllaDB node capacity) to contain both the copied SSTable files and the entire data migrated to ScyllaDB (keyspace RF should also be taken into account).

- - Option 2: NFS mount point on Scylla nodes to the SSTable files located in the Apache Cassandra nodes (see NFS mount instructions in the previous step). This saves the additional disk space needed for the 1st option.
+ - Option 2: NFS mount point on ScyllaDB nodes to the SSTable files located in the Apache Cassandra nodes (see NFS mount instructions in the previous step). This saves the additional disk space needed for the 1st option.

Note: both the local folder and the NFS mount point paths must end with ``/[ks]/[table]`` format, used by the sstableloader for parsing purposes (see ``sstableloader help`` for more details).
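
For example, a minimal sketch of staging the files under such a path (keyspace ``ks`` and table ``t`` are placeholders; substitute your own ``[uuid]`` and ``[epoch_timestamp]``):

.. code-block:: shell

   # Stage the snapshot files under a directory that ends with /[ks]/[table]
   mkdir -p /var/tmp/sstables/ks/t
   cp /var/lib/cassandra/data/ks/t-[uuid]/snapshots/[epoch_timestamp]/* /var/tmp/sstables/ks/t/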

-6. Use the Scylla sstableloader tool (**NOT** the Apache Cassandra one which has the same name) to load the SSTables. Running without any parameters will present the list of options and usage. Most important are the SSTables directory and the Scylla node IP.
+6. Use the ScyllaDB sstableloader tool (**NOT** the Apache Cassandra one which has the same name) to load the SSTables. Running without any parameters will present the list of options and usage. Most important are the SSTables directory and the ScyllaDB node IP.

Examples:

-- ``sstableloader -d [Scylla IP] .../[ks]/[table]``
+- ``sstableloader -d [ScyllaDB IP] .../[ks]/[table]``

- ``sstableloader -d [scylla IP] .../[mount point]`` (in ``/[ks]/[table]`` format)

-7. We recommend running several sstableloaders in parallel and utilizing all Scylla nodes as targets for SSTable loading. Start with one keyspace and its underlying SSTable files from all Apache Cassandra nodes. After completion, continue to the next keyspace and so on.
+7. We recommend running several sstableloaders in parallel and utilizing all ScyllaDB nodes as targets for SSTable loading. Start with one keyspace and its underlying SSTable files from all Apache Cassandra nodes. After completion, continue to the next keyspace and so on.

Note: limit the sstableloader speed by using the throttling ``-t`` parameter, considering your physical HW, live traffic load, and network utilization (see sstableloader help for more details).
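
For example, a minimal sketch of two throttled loaders running in parallel against different ScyllaDB nodes (IPs, paths, and the ``-t`` value are placeholders; check ``sstableloader help`` for the throttle units):

.. code-block:: shell

   sstableloader -t 100 -d 10.0.1.11 /var/tmp/sstables/ks/t  &
   sstableloader -t 100 -d 10.0.1.12 /var/tmp/sstables/ks/t2 &
   wait   # block until both loading jobs finish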

-8. Once you completed loading the SSTable files from all keyspaces, you can use ``cqlsh`` or any other tool to validate the data migrated successfully. We strongly recommend configuring your application to perform both writes and reads to/from both data stores. Apache Cassandra (as is, up to this point) and Scylla (now as primary) for a verification period. Keep track of the number of requests for which the data in both these data stores are mismatched.
+8. Once you have completed loading the SSTable files from all keyspaces, you can use ``cqlsh`` or any other tool to validate that the data migrated successfully. We strongly recommend configuring your application to perform both writes and reads to/from both data stores, Apache Cassandra (as is, up to this point) and ScyllaDB (now as primary), for a verification period. Keep track of the number of requests for which the data in these two data stores is mismatched.
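
As a simple illustration of such a check, the sketch below compares the result of the same query on both clusters (keyspace, table, key, and addresses are placeholders):

.. code-block:: shell

   CASSANDRA_IP=10.0.0.11 ; SCYLLA_IP=10.0.1.11   # placeholder addresses
   cqlsh "${CASSANDRA_IP}" -e "SELECT * FROM ks.t WHERE pk = 1;" > /tmp/cassandra.out
   cqlsh "${SCYLLA_IP}" -e "SELECT * FROM ks.t WHERE pk = 1;" > /tmp/scylla.out
   diff /tmp/cassandra.out /tmp/scylla.out && echo "match" || echo "MISMATCH"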

-9. **Apache Cassandra end of life:** once you are confident in your Scylla cluster, you can flip the flag in your application/s, stop writes and reads against the Cassandra cluster, and make Scylla your sole target/source.
+9. **Apache Cassandra end of life:** once you are confident in your ScyllaDB cluster, you can flip the flag in your application/s, stop writes and reads against the Cassandra cluster, and make ScyllaDB your sole target/source.


Failure Handling
@@ -186,57 +186,57 @@ Each loading job is per keyspace/table_name, that means in any case of failure,

**What should I do if an Apache Cassandra node fails?**

-If the node that failed was a node you were loading SSTables from, then the sstableloader will also fail. If you were using RF>1 then the data exists on other node/s. Hence you can continue with the sstable loading from all the other Cassandra nodes. Once completed, all your data should be on Scylla.
+If the node that failed was a node you were loading SSTables from, then the sstableloader will also fail. If you were using RF>1 then the data exists on other node/s. Hence you can continue with the sstable loading from all the other Cassandra nodes. Once completed, all your data should be on ScyllaDB.

-**What should I do if a Scylla node fails?**
+**What should I do if a ScyllaDB node fails?**

-If the node that failed was a node you were loading sstables to, then the sstableloader will also fail. Restart the loading job and use a different Scylla node as your target.
+If the node that failed was a node you were loading sstables to, then the sstableloader will also fail. Restart the loading job and use a different ScyllaDB node as your target.

**How to roll back and start from scratch?**

-1. Stop the dual writes to Scylla
-2. Stop Scylla service ``sudo systemctl stop scylla-server``
-3. Use ``cqlsh`` to perform ``truncate`` on all data already loaded to Scylla
-4. Start the dual writes again to Scylla
+1. Stop the dual writes to ScyllaDB
+2. Stop ScyllaDB service ``sudo systemctl stop scylla-server``
+3. Use ``cqlsh`` to perform ``truncate`` on all data already loaded to ScyllaDB
+4. Start the dual writes again to ScyllaDB
5. Take a new snapshot of all Cassandra nodes
-6. Start loading SSTables again to Scylla from the NEW snapshot folder
+6. Start loading SSTables again to ScyllaDB from the NEW snapshot folder
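
For steps 3 and 5 above, a minimal sketch (keyspace, table, and address are placeholders):

.. code-block:: shell

   SCYLLA_IP=10.0.1.11                        # placeholder address
   cqlsh "${SCYLLA_IP}" -e "TRUNCATE ks.t;"   # step 3: truncate data already loaded into ScyllaDB
   nodetool snapshot ks                       # step 5: fresh snapshot, run on each Apache Cassandra node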


Notes, Limitations and Known Issues
-----------------------------------

-#. The ``Duration`` data type is only supported in Scylla 2.1 and later (`issue-2240 <https://github.com/scylladb/scylla/issues/2240>`_). This is relevant only when migrating from Apache Cassandra 3.X.
+#. The ``Duration`` data type is only supported in ScyllaDB 2.1 and later (`issue-2240 <https://github.com/scylladb/scylla/issues/2240>`_). This is relevant only when migrating from Apache Cassandra 3.X.

-#. Changes in table schema from Apache Cassandra 3.0 that requires adjustments for Scylla 2.x, 1.x table schema:
+#. Changes in the table schema from Apache Cassandra 3.0 that require adjustments for the ScyllaDB 2.x and 1.x table schema:

- Changes in create table (`issue-8384 <https://issues.apache.org/jira/browse/CASSANDRA-8384>`_)
- ``crc_check_chance`` out of compression options (`issue-9839 <https://issues.apache.org/jira/browse/CASSANDRA-9839>`_)

-#. Scylla 2.x CQL client ``cqlsh`` does not display the millisecond values of a ``timestamp`` data type. (`scylla-tools-java/issues #36 <https://github.com/scylladb/scylla-tools-java/issues/36>`_)
+#. ScyllaDB 2.x CQL client ``cqlsh`` does not display the millisecond values of a ``timestamp`` data type. (`scylla-tools-java/issues #36 <https://github.com/scylladb/scylla-tools-java/issues/36>`_)

-#. ``Nodetool tablestats`` partition keys (estimated) number in Scylla, post migration from Apache Cassandra, differs by 20% less up to 120% more than the original amount in Cassandra (`issue-2545 <https://github.com/scylladb/scylla/issues/2545>`_)
+#. The estimated number of partition keys reported by ``nodetool tablestats`` in ScyllaDB, post migration from Apache Cassandra, can range from 20% less to 120% more than the original amount in Cassandra (`issue-2545 <https://github.com/scylladb/scylla/issues/2545>`_)

-#. Scylla 2.x is using Apache Cassandra 2.x file format. This means that migrating from Apache Cassandra 3.x to Scylla 2.x will result in a different storage space of the same data on the Scylla cluster. Scylla 3.x uses the same format as Cassandra 3.x
+#. ScyllaDB 2.x uses the Apache Cassandra 2.x file format. This means that migrating from Apache Cassandra 3.x to ScyllaDB 2.x will result in the same data taking a different amount of storage space on the ScyllaDB cluster. ScyllaDB 3.x uses the same format as Cassandra 3.x.

Counters
^^^^^^^^
-In version 2.1, Apache Cassandra changed how the counters work. The previous design had some hard to fix issues, which meant that there is no safe and general way of converting counter data from the old format to the new one. As a result, counter cells created before version 2.1 may contain old-format information even **after** migration to the latest Cassandra version. As Scylla implements only the new counter design, this imposes restrictions on how counters can be migrated from Cassandra.
+In version 2.1, Apache Cassandra changed how counters work. The previous design had some hard-to-fix issues, which means there is no safe and general way of converting counter data from the old format to the new one. As a result, counter cells created before version 2.1 may contain old-format information even **after** migration to the latest Cassandra version. As ScyllaDB implements only the new counter design, this imposes restrictions on how counters can be migrated from Cassandra.

-Copying counter SSTables over to Scylla is unsafe and, by default, disallowed. Even if you use sstableloader, which is a safe way to copy the tables, it will refuse to load data in the legacy format.
+Copying counter SSTables over to ScyllaDB is unsafe and, by default, disallowed. Even if you use sstableloader, which is a safe way to copy the tables, it will refuse to load data in the legacy format.

-**Schema differences between Apache Cassandra 4.x and Scylla 4.x**
+**Schema differences between Apache Cassandra 4.x and ScyllaDB 4.x**

-The following table illustrates the default schema differences between Apache Cassandra 4.x and Scylla 3.x
+The following table illustrates the default schema differences between Apache Cassandra 4.x and ScyllaDB 3.x

Notable differences:

- Since CDC is implemented differently in Cassandra, 'cdc=false' in the Cassandra schema should be changed to cdc = {'enabled': 'false'}

-- additional_write_policy = '99p' is **NOT** supported in Scylla; make sure you remove it from the schema.
+- additional_write_policy = '99p' is **NOT** supported in ScyllaDB; make sure you remove it from the schema.

-- extensions = {} is **NOT** supported in Scylla; make sure you remove it from the schema.
+- extensions = {} is **NOT** supported in ScyllaDB; make sure you remove it from the schema.

-- read_repair = 'BLOCKING' is **NOT** supported in Scylla; make sure you remove it from the schema.
+- read_repair = 'BLOCKING' is **NOT** supported in ScyllaDB; make sure you remove it from the schema.

- In the expression compression = {'chunk_length_in_kb': '16', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}, replace 'class': 'org.apache.cassandra.io.compress.LZ4Compressor' with 'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor' (see the example below).
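
A hypothetical ``ALTER TABLE`` that applies this compression adjustment on the ScyllaDB side (keyspace and table names are placeholders):

.. code-block:: shell

   cqlsh -e "ALTER TABLE ks.t WITH compression = {'chunk_length_in_kb': '16', 'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'};"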

@@ -245,22 +245,22 @@ Notable differences:

.. note::

- If you used the same Counter SSTables with Apache Cassandra from before version 2.1, the migration to Scylla would not work
+ If you used the same Counter SSTables with Apache Cassandra from before version 2.1, the migration to ScyllaDB would not work


-**Schema differences between Apache Cassandra 3.x and Scylla 2.x and 1.x**
+**Schema differences between Apache Cassandra 3.x and ScyllaDB 2.x and 1.x**

-The following table illustrates the default schema differences between Apache Cassandra 3.x and Scylla 2.x, 1.x
+The following table illustrates the default schema differences between Apache Cassandra 3.x and ScyllaDB 2.x, 1.x

Notable differences:

-- 'caching' section is supported in Scylla, yet requires adjustments to the schema (see below).
+- 'caching' section is supported in ScyllaDB, yet requires adjustments to the schema (see below).

-- 'crc_check_chance' (marked in **bold**) is **NOT** supported in Scylla; make sure you remove it from the schema.
+- 'crc_check_chance' (marked in **bold**) is **NOT** supported in ScyllaDB; make sure you remove it from the schema.


+---------------------------------------------------------------+---------------------------------------------------------------+
-|Apache Cassandra 3.10 (uses 3.x Schema) |Scylla 2.x 1.x (uses Apache Cassandra 2.1 Schema) |
+|Apache Cassandra 3.10 (uses 3.x Schema) |ScyllaDB 2.x 1.x (uses Apache Cassandra 2.1 Schema) |
+===============================================================+===============================================================+
|.. code-block:: cql |.. code-block:: cql |
| | |
@@ -327,9 +327,9 @@ Notable differences:
| AND read_repair_chance = 0.0; | AND read_repair_chance = 0.0; |
+---------------------------------------------------------------+---------------------------------------------------------------+

-More on :doc:`Scylla and Apache Cassandra Compatibility </using-scylla/cassandra-compatibility/>`
+More on :doc:`ScyllaDB and Apache Cassandra Compatibility </using-scylla/cassandra-compatibility/>`

-Also see the `Migrating to Scylla lesson <https://university.scylladb.com/courses/scylla-operations/lessons/migrating-to-scylla/>`_ on Scylla University.
+Also see the `Migrating to ScyllaDB lesson <https://university.scylladb.com/courses/scylla-operations/lessons/migrating-to-scylla/>`_ on ScyllaDB University.

.. include:: /rst_include/apache-copyrights-index.rst

diff --git a/docs/operating-scylla/procedures/cluster-management/_common/match_version.rst b/docs/operating-scylla/procedures/cluster-management/_common/match_version.rst
--- a/docs/operating-scylla/procedures/cluster-management/_common/match_version.rst
+++ b/docs/operating-scylla/procedures/cluster-management/_common/match_version.rst
@@ -1,10 +1,10 @@
.. Note::

- Make sure to use the same Scylla **patch release** on the new/replaced node, to match the rest of the cluster. It is not recommended to add a new node with a different release to the cluster.
- For example, use the following for installing Scylla patch release (use your deployed version)
+ Make sure to use the same ScyllaDB **patch release** on the new/replaced node, to match the rest of the cluster. It is not recommended to add a new node with a different release to the cluster.
+ For example, use the following for installing ScyllaDB patch release (use your deployed version)

- * Scylla Enterprise - ``sudo yum install scylla-enterprise-2018.1.9``
+ * ScyllaDB Enterprise - ``sudo yum install scylla-enterprise-2018.1.9``

- * Scylla open source - ``sudo yum install scylla-3.0.3``
+ * ScyllaDB open source - ``sudo yum install scylla-3.0.3``


diff --git a/docs/operating-scylla/procedures/cluster-management/_common/prereq.rst b/docs/operating-scylla/procedures/cluster-management/_common/prereq.rst
--- a/docs/operating-scylla/procedures/cluster-management/_common/prereq.rst
+++ b/docs/operating-scylla/procedures/cluster-management/_common/prereq.rst
@@ -1,5 +1,5 @@
* cluster_name - ``grep cluster_name /etc/scylla/scylla.yaml``
* seeds - ``grep seeds: /etc/scylla/scylla.yaml``
* endpoint_snitch - ``grep endpoint_snitch /etc/scylla/scylla.yaml``
-* Scylla version - ``scylla --version``
+* ScyllaDB version - ``scylla --version``
* Authenticator - ``grep authenticator /etc/scylla/scylla.yaml``
\ No newline at end of file
diff --git a/docs/operating-scylla/procedures/cluster-management/add-dc-to-existing-dc.rst b/docs/operating-scylla/procedures/cluster-management/add-dc-to-existing-dc.rst
--- a/docs/operating-scylla/procedures/cluster-management/add-dc-to-existing-dc.rst
+++ b/docs/operating-scylla/procedures/cluster-management/add-dc-to-existing-dc.rst
@@ -3,7 +3,7 @@ Adding a New Data Center Into an Existing ScyllaDB Cluster

.. scylladb_include_flag:: upgrade-note-add-new-dc.rst

-The following procedure specifies how to add a Data Center (DC) to a live Scylla Cluster, in a single data center, :ref:`multi-availability zone <faq-best-scenario-node-multi-availability-zone>`, or multi-datacenter. Adding a DC out-scales the cluster and provides higher availability (HA).
+The following procedure specifies how to add a Data Center (DC) to a live ScyllaDB Cluster, in a single data center, :ref:`multi-availability zone <faq-best-scenario-node-multi-availability-zone>`, or multi-datacenter. Adding a DC out-scales the cluster and provides higher availability (HA).

The procedure includes:

@@ -31,8 +31,8 @@ Prerequisites

#. On all client applications, switch the consistency level to ``LOCAL_*`` (LOCAL_ONE, LOCAL_QUORUM,etc.) to prevent the coordinators from accessing the data center you're adding.

-#. Install the new **clean** Scylla nodes (See `Clean Data from Nodes`_ below) on the new datacenter, see :doc:`Getting Started </getting-started/index>` for further instructions, create as many nodes that you need.
- Follow the Scylla install procedure up to ``scylla.yaml`` configuration phase.
+#. Install the new **clean** ScyllaDB nodes (see `Clean Data from Nodes`_ below) in the new datacenter. See :doc:`Getting Started </getting-started/index>` for further instructions, and create as many nodes as you need.
+ Follow the ScyllaDB install procedure up to the ``scylla.yaml`` configuration phase.
In the case that the node starts during the installation process follow :doc:`these instructions </operating-scylla/procedures/cluster-management/clear-data>`.

.. include:: /operating-scylla/procedures/cluster-management/_common/quorum-requirement.rst
@@ -119,7 +119,7 @@ Add New DC

* **cluster_name** - Set the selected cluster_name.
* **seeds** - IP address of an existing node (or nodes).
- * **listen_address** - IP address that Scylla used to connect to the other Scylla nodes in the cluster.
+ * **listen_address** - IP address that ScyllaDB uses to connect to the other ScyllaDB nodes in the cluster.
* **endpoint_snitch** - Set the selected snitch.
* **rpc_address** - Address for CQL client connections.
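
A sketch of the corresponding ``scylla.yaml`` fragment for one node in the new DC (all values are placeholders):

.. code-block:: shell

   cluster_name: 'my_cluster'
   seeds: "10.0.0.11"            # an existing node in the current cluster
   listen_address: 10.1.0.21     # this node's IP
   endpoint_snitch: GossipingPropertyFileSnitch
   rpc_address: 10.1.0.21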

@@ -201,9 +201,9 @@ Add New DC

The rebuild ensures that the new nodes that were just added to the cluster will recognize the existing datacenters in the cluster.

-#. Run a full cluster repair, using :doc:`nodetool repair -pr </operating-scylla/nodetool-commands/repair>` on each node, or using `Scylla Manager ad-hoc repair <https://manager.docs.scylladb.com/stable/repair>`_
+#. Run a full cluster repair, using :doc:`nodetool repair -pr </operating-scylla/nodetool-commands/repair>` on each node, or using `ScyllaDB Manager ad-hoc repair <https://manager.docs.scylladb.com/stable/repair>`_

-#. If you are using Scylla Monitoring, update the `monitoring stack <https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#configure-scylla-nodes-from-files>`_ to monitor it. If you are using Scylla Manager, make sure you install the `Manager Agent <https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html>`_ and Manager can access the new DC.
+#. If you are using ScyllaDB Monitoring, update the `monitoring stack <https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#configure-scylla-nodes-from-files>`_ to monitor it. If you are using ScyllaDB Manager, make sure you install the `Manager Agent <https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html>`_ and Manager can access the new DC.
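
A minimal sketch of these two follow-up steps, assuming the standard nodetool syntax (the existing data center name is a placeholder):

.. code-block:: shell

   # On each node in the NEW data center:
   nodetool rebuild -- <existing_dc_name>
   # Then, on each node in the cluster (or via ScyllaDB Manager):
   nodetool repair -pr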


Configure the Client not to Connect to the New DC
diff --git a/docs/operating-scylla/procedures/cluster-management/add-node-to-cluster.rst b/docs/operating-scylla/procedures/cluster-management/add-node-to-cluster.rst
--- a/docs/operating-scylla/procedures/cluster-management/add-node-to-cluster.rst
+++ b/docs/operating-scylla/procedures/cluster-management/add-node-to-cluster.rst
@@ -31,7 +31,7 @@ Log into one of the nodes in the cluster to collect the following information:
Procedure
---------

-#. Install ScyllaDB on the nodes you want to add to the cluster. See :doc:`Getting Started</getting-started/index>` for further instructions. Follow the Scylla installation procedure up to ``scylla.yaml`` configuration phase. Make sure that the Scylla version of the new node is identical to the other nodes in the cluster.
+#. Install ScyllaDB on the nodes you want to add to the cluster. See :doc:`Getting Started</getting-started/index>` for further instructions. Follow the ScyllaDB installation procedure up to the ``scylla.yaml`` configuration phase. Make sure that the ScyllaDB version of the new node is identical to the version on the other nodes in the cluster.

If the node starts during the process, follow :doc:`What to do if a Node Starts Automatically </operating-scylla/procedures/cluster-management/clear-data>`.

@@ -43,7 +43,7 @@ Procedure

* **cluster_name** - Specifies the name of the cluster.

- * **listen_address** - Specifies the IP address that Scylla used to connect to the other Scylla nodes in the cluster.
+ * **listen_address** - Specifies the IP address that ScyllaDB uses to connect to the other ScyllaDB nodes in the cluster.

* **endpoint_snitch** - Specifies the selected snitch.

@@ -98,7 +98,7 @@ Procedure

#. Wait until the new node becomes UN (Up Normal) in the output of :doc:`nodetool status </operating-scylla/nodetool-commands/status>` on one of the old nodes.

-#. If you are using Scylla Monitoring, update the `monitoring stack <https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#configure-scylla-nodes-from-files>`_ to monitor it. If you are using Scylla Manager, make sure you install the `Manager Agent <https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html>`_, and Manager can access it.
+#. If you are using ScyllaDB Monitoring, update the `monitoring stack <https://monitoring.docs.scylladb.com/stable/install/monitoring_stack.html#configure-scylla-nodes-from-files>`_ to monitor it. If you are using ScyllaDB Manager, make sure you install the `Manager Agent <https://manager.docs.scylladb.com/stable/install-scylla-manager-agent.html>`_, and Manager can access it.


.. _add-new-node-upgrade-info:
diff --git a/docs/operating-scylla/procedures/cluster-management/clear-data.rst b/docs/operating-scylla/procedures/cluster-management/clear-data.rst
--- a/docs/operating-scylla/procedures/cluster-management/clear-data.rst
+++ b/docs/operating-scylla/procedures/cluster-management/clear-data.rst
@@ -2,21 +2,21 @@
What to do if a Node Starts Automatically
=========================================

-If, for any reason, the Scylla service started before you had a chance to update the configuration file, some of the system tables may already reflect an incorrect status, and unfortunately, a simple restart will not fix the issue.
+If, for any reason, the ScyllaDB service started before you had a chance to update the configuration file, some of the system tables may already reflect an incorrect status, and unfortunately, a simple restart will not fix the issue.
In this case, the safest way is to stop the service, clean all of the data, and start the service again.

Procedure
---------

-#. Stop the Scylla service.
+#. Stop the ScyllaDB service.

.. include:: /rst_include/scylla-commands-stop-index.rst

#. Delete the Data and Commitlog folders.

.. include:: /rst_include/clean-data-code.rst

-#. Start the Scylla service.
+#. Start the ScyllaDB service.

.. include:: /rst_include/scylla-commands-start-index.rst
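
Assuming the default data and commitlog locations under ``/var/lib/scylla``, the included steps boil down to a sketch like the following:

.. code-block:: shell

   sudo systemctl stop scylla-server
   sudo rm -rf /var/lib/scylla/data/* /var/lib/scylla/commitlog/* /var/lib/scylla/hints/* /var/lib/scylla/view_hints/*
   sudo systemctl start scylla-server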

@@ -25,6 +25,6 @@ Procedure
Additional Topics
-----------------

-:doc:`Create a Scylla Cluster - Single Data Center (DC) </operating-scylla/procedures/cluster-management/create-cluster>`
+:doc:`Create a ScyllaDB Cluster - Single Data Center (DC) </operating-scylla/procedures/cluster-management/create-cluster>`

-:doc:`Scylla Procedures </operating-scylla/procedures/cluster-management/index/>`
+:doc:`ScyllaDB Procedures </operating-scylla/procedures/cluster-management/index/>`
diff --git a/docs/operating-scylla/procedures/cluster-management/create-cluster-multidc.rst b/docs/operating-scylla/procedures/cluster-management/create-cluster-multidc.rst
--- a/docs/operating-scylla/procedures/cluster-management/create-cluster-multidc.rst
+++ b/docs/operating-scylla/procedures/cluster-management/create-cluster-multidc.rst
@@ -55,8 +55,8 @@ When working with production environments, you must choose one of the snitches b
Procedure
---------

-1. Install Scylla on the nodes you want to add to the cluster. See :doc:`Getting Started</getting-started/index>` for further instructions, create as many nodes that you need.
-Follow the Scylla install procedure up to scylla.yaml configuration phase.
+1. Install ScyllaDB on the nodes you want to add to the cluster. See :doc:`Getting Started</getting-started/index>` for further instructions, and create as many nodes as you need.
+Follow the ScyllaDB install procedure up to the scylla.yaml configuration phase.

In case that your node starts during the process follow :doc:`these instructions </operating-scylla/procedures/cluster-management/clear-data>`

@@ -65,14 +65,14 @@ The file can be found under ``/etc/scylla/``.

- **cluster_name** - Set the selected cluster_name
- **seeds** - Specify the IP of the node you chose to be a seed node. New nodes will use the IP of this seed node to connect to the cluster and learn the cluster topology and state.
-- **listen_address** - IP address that the Scylla use to connect to other Scylla nodes in the cluster
+- **listen_address** - IP address that ScyllaDB uses to connect to other ScyllaDB nodes in the cluster
- **endpoint_snitch** - Set the selected snitch
- **rpc_address** - Address for CQL client connection

3. In the ``cassandra-rackdc.properties`` file, edit the rack and data center information.
The file can be found under ``/etc/scylla/``.

-To save bandwidth, add the ``prefer_local=true`` parameter. Scylla will use the node private (local) IP address when the nodes are in the same data center.
+To save bandwidth, add the ``prefer_local=true`` parameter. ScyllaDB will then use the node's private (local) IP address when the nodes are in the same data center, as shown in the example below.
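
For example, a node in the U.S data center might use a ``cassandra-rackdc.properties`` like the sketch below (the dc and rack names are illustrative):

.. code-block:: shell

   # /etc/scylla/cassandra-rackdc.properties
   dc=US-DC
   rack=RACK1
   prefer_local=true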

4. Start the nodes.

@@ -85,7 +85,7 @@ To save bandwidth, add the ``prefer_local=true`` parameter. Scylla will use the

In this example, we will show how to install a nine-node cluster.

-1. Install nine Scylla nodes, three nodes in each data center (U.S, ASIA, EUROPE). The IP's are:
+1. Install nine ScyllaDB nodes, three nodes in each data center (U.S, ASIA, EUROPE). The IPs are:

.. code-block:: shell

@@ -107,7 +107,7 @@ In this example, we will show how to install a nine nodes cluster.
Node8 192.168.1.208 54.235.9.159
Node9 192.168.1.209 54.146.228.25

-2. In each Scylla node, edit the ``scylla.yaml`` file. See :ref:`Single Multi Data Centers Configuration Table <create-cluster-multi-config-table>` for reference.
+2. In each ScyllaDB node, edit the ``scylla.yaml`` file. See :ref:`Single Multi Data Centers Configuration Table <create-cluster-multi-config-table>` for reference.

**U.S Data-center - 192.168.1.201**

@@ -148,7 +148,7 @@ In this example, we will show how to install a nine nodes cluster.
broadcast_rpc_address: "54.160.174.243"
listen_on_broadcast_address: true (optional)

-3. In each Scylla node, edit the ``cassandra-rackdc.properties`` file with the relevant rack and data center information
+3. In each ScyllaDB node, edit the ``cassandra-rackdc.properties`` file with the relevant rack and data center information

**Nodes 1-3**

@@ -211,4 +211,4 @@ In this example, we will show how to install a nine nodes cluster.

See also:

-:doc:`Create a Scylla Cluster - Single Data Center (DC) </operating-scylla/procedures/cluster-management/create-cluster>`
+:doc:`Create a ScyllaDB Cluster - Single Data Center (DC) </operating-scylla/procedures/cluster-management/create-cluster>`
diff --git a/docs/operating-scylla/procedures/cluster-management/decommissioning-data-center.rst b/docs/operating-scylla/procedures/cluster-management/decommissioning-data-center.rst
--- a/docs/operating-scylla/procedures/cluster-management/decommissioning-data-center.rst
+++ b/docs/operating-scylla/procedures/cluster-management/decommissioning-data-center.rst
@@ -74,7 +74,7 @@ Procedure
cqlsh> ALTER KEYSPACE nba WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'US-DC' : 3, 'EUROPE-DC' : 3};

#. Run :doc:`nodetool decommission </operating-scylla/nodetool-commands/decommission>` on every node in the data center that is to be removed.
- Refer to :doc:`Remove a Node from a Scylla Cluster - Down Scale </operating-scylla/procedures/cluster-management/remove-node>` for further information.
+ Refer to :doc:`Remove a Node from a ScyllaDB Cluster - Down Scale </operating-scylla/procedures/cluster-management/remove-node>` for further information.

For example:

diff --git a/docs/operating-scylla/procedures/cluster-management/ec2-dc.rst b/docs/operating-scylla/procedures/cluster-management/ec2-dc.rst
--- a/docs/operating-scylla/procedures/cluster-management/ec2-dc.rst
+++ b/docs/operating-scylla/procedures/cluster-management/ec2-dc.rst
@@ -1,15 +1,15 @@
Create a ScyllaDB Cluster on EC2 (Single or Multi Data Center)
===============================================================

-The easiest way to run a Scylla cluster on EC2 is by using `Scylla AMI <https://www.scylladb.com/download/?platform=aws>`_, which is Ubuntu-based.
-To use a different OS or your own `AMI <https://en.wikipedia.org/wiki/Amazon_Machine_Image>`_ (Amazon Machine Image) or set up a multi DC Scylla cluster,
-you need to configure the Scylla cluster on your own. This page guides you through this process.
+The easiest way to run a ScyllaDB cluster on EC2 is by using `ScyllaDB AMI <https://www.scylladb.com/download/?platform=aws>`_, which is Ubuntu-based.
+To use a different OS or your own `AMI <https://en.wikipedia.org/wiki/Amazon_Machine_Image>`_ (Amazon Machine Image) or set up a multi DC ScyllaDB cluster,
+you need to configure the ScyllaDB cluster on your own. This page guides you through this process.

-A Scylla cluster on EC2 can be deployed as a single-DC cluster or a multi-DC cluster. The table below describes how to configure parameters in the ``scylla.yaml`` file for each node in your cluster for both cluster types.
+A ScyllaDB cluster on EC2 can be deployed as a single-DC cluster or a multi-DC cluster. The table below describes how to configure parameters in the ``scylla.yaml`` file for each node in your cluster for both cluster types.

-For more information on Scylla AMI and the configuration of parameters in ``scylla.yaml`` from the EC2 user data, see `Scylla Machine Image <https://github.com/scylladb/scylla-machine-image>`_.
+For more information on ScyllaDB AMI and the configuration of parameters in ``scylla.yaml`` from the EC2 user data, see `ScyllaDB Machine Image <https://github.com/scylladb/scylla-machine-image>`_.

-The best practice is to use each EC2 region as a Scylla DC. In such a case, nodes communicate using Internal (Private) IPs inside the region and using External (Public) IPs between regions (Data Centers).
+The best practice is to use each EC2 region as a ScyllaDB DC. In such a case, nodes communicate using Internal (Private) IPs inside the region and using External (Public) IPs between regions (Data Centers).

For further information, see `AWS instance addressing <http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html>`_.

@@ -52,17 +52,17 @@ Procedure
#. Install ScyllaDB on the nodes you want to add to the cluster. See :doc:`Getting Started</getting-started/index>` for installation instructions and
follow the procedure up to the ``scylla.yaml`` configuration phase.

- If the Scylla service is already running (for example, if you are using `Scylla AMI`_), stop it before moving to the next step by using :doc:`these instructions </operating-scylla/procedures/cluster-management/clear-data>`.
+ If the ScyllaDB service is already running (for example, if you are using `ScyllaDB AMI`_), stop it before moving to the next step by using :doc:`these instructions </operating-scylla/procedures/cluster-management/clear-data>`.

#. On each node, edit the ``scylla.yaml`` file in ``/etc/scylla/`` to configure the parameters listed below. See the :ref:`table` above on how to configure your cluster.

* **cluster_name** - Set the selected cluster_name.
- * **seeds** - Specify the IP of the node you chose to be a seed node. See :doc:`Scylla Seed Nodes </kb/seed-nodes/>` for details.
- * **listen_address** - IP address that Scylla used to connect to other Scylla nodes in the cluster.
+ * **seeds** - Specify the IP of the node you chose to be a seed node. See :doc:`ScyllaDB Seed Nodes </kb/seed-nodes/>` for details.
+ * **listen_address** - IP address that ScyllaDB uses to connect to other ScyllaDB nodes in the cluster.
* **endpoint_snitch** - Set the selected snitch.
* **rpc_address** - Address for CQL client connection.
* **broadcast_address** - The IP address a node tells other nodes in the cluster to contact it by.
- * **broadcast_rpc_address** - Default: unset. The RPC address to broadcast to drivers and other Scylla nodes. It cannot be set to 0.0.0.0. If left blank, it will be set to the value of ``rpc_address``. If ``rpc_address`` is set to 0.0.0.0, ``broadcast_rpc_address`` must be explicitly configured.
+ * **broadcast_rpc_address** - Default: unset. The RPC address to broadcast to drivers and other ScyllaDB nodes. It cannot be set to 0.0.0.0. If left blank, it will be set to the value of ``rpc_address``. If ``rpc_address`` is set to 0.0.0.0, ``broadcast_rpc_address`` must be explicitly configured.

#. Start the nodes.

diff --git a/docs/operating-scylla/procedures/cluster-management/handling-membership-change-failures.rst b/docs/operating-scylla/procedures/cluster-management/handling-membership-change-failures.rst
--- a/docs/operating-scylla/procedures/cluster-management/handling-membership-change-failures.rst
+++ b/docs/operating-scylla/procedures/cluster-management/handling-membership-change-failures.rst
@@ -153,14 +153,14 @@ If you're executing ``removenode`` too quickly after a failed membership change,

.. code-block:: console

- nodetool: Scylla API server HTTP POST to URL '/storage_service/remove_node' failed: seastar::rpc::remote_verb_error (node_ops_cmd_check: Node 127.0.0.2 rejected node_ops_cmd=removenode_abort from node=127.0.0.1 with ops_uuid=0ba0a5ab-efbd-4801-a31c-034b5f55487c, pending_node_ops={b47523f2-de6a-4c38-8490-39127dba6b6a}, pending node ops is in progress)
+ nodetool: ScyllaDB API server HTTP POST to URL '/storage_service/remove_node' failed: seastar::rpc::remote_verb_error (node_ops_cmd_check: Node 127.0.0.2 rejected node_ops_cmd=removenode_abort from node=127.0.0.1 with ops_uuid=0ba0a5ab-efbd-4801-a31c-034b5f55487c, pending_node_ops={b47523f2-de6a-4c38-8490-39127dba6b6a}, pending node ops is in progress)

In that case simply wait for 2 minutes before trying ``removenode`` again.

If ``removenode`` returns an error like:

.. code-block:: console

- nodetool: Scylla API server HTTP POST to URL '/storage_service/remove_node' failed: std::runtime_error (removenode[12e7e05b-d1ae-4978-b6a6-de0066aa80d8]: Host ID 42405b3b-487e-4759-8590-ddb9bdcebdc5 not found in the cluster)
+ nodetool: ScyllaDB API server HTTP POST to URL '/storage_service/remove_node' failed: std::runtime_error (removenode[12e7e05b-d1ae-4978-b6a6-de0066aa80d8]: Host ID 42405b3b-487e-4759-8590-ddb9bdcebdc5 not found in the cluster)

and you're sure that you're providing the correct Host ID, it means that the member was already removed and you don't have to clean up after it.
diff --git a/docs/operating-scylla/procedures/cluster-management/index.rst b/docs/operating-scylla/procedures/cluster-management/index.rst
--- a/docs/operating-scylla/procedures/cluster-management/index.rst
+++ b/docs/operating-scylla/procedures/cluster-management/index.rst
@@ -19,7 +19,7 @@ Cluster Management Procedures
rebuild-node
Remove a DC <decommissioning-data-center>
Clear Data <clear-data>
- Add a Decommissioned Node Back to a Scylla Cluster <revoke-decommission>
+ Add a Decommissioned Node Back to a ScyllaDB Cluster <revoke-decommission>
Remove a Seed Node from Seed List <remove-seed>
Update Topology Strategy From Simple to Network <update-topology-strategy-from-simple-to-network>
Safely Shutdown Your Cluster <safe-shutdown>
diff --git a/docs/operating-scylla/procedures/cluster-management/rebuild-node.rst b/docs/operating-scylla/procedures/cluster-management/rebuild-node.rst
--- a/docs/operating-scylla/procedures/cluster-management/rebuild-node.rst
+++ b/docs/operating-scylla/procedures/cluster-management/rebuild-node.rst
@@ -2,10 +2,10 @@
Rebuild a Node After Losing the Data Volume
============================================

-When running in Scylla on EC2, it is recommended to use i3 type instances, storing the data on fast, ephemeral SSD drives.
+When running ScyllaDB on EC2, it is recommended to use i3 type instances, storing the data on fast, ephemeral SSD drives.
Stopping the node and starting it for whatever reason means the data on the node is lost.
This data loss applies not only to i3 type instances but to the i2 type as well.
-The good news is Scylla is a HA database, and replicas of the data are stored on additional nodes.
+The good news is ScyllaDB is an HA database, and replicas of the data are stored on additional nodes.
This is also why it's highly recommended to use a replication factor of at least three per Data Center (for example, 1 DC, RF = 3, 2 DCs RF = 6).

To recover the data and rebuild the node, follow this procedure:
@@ -14,12 +14,12 @@ To recover the data and rebuild the node, follow this procedure:

#. Add (or edit, if already present) the ``replace_node_first_boot`` parameter and set it to the
Host ID of the node before it restarted.
-#. Stop Scylla Server
+#. Stop ScyllaDB Server

.. include:: /rst_include/scylla-commands-stop-index.rst

#. If there are multiple disks, execute a RAID setup for the disks by running the following script: ``/opt/scylladb/scylla-machine-image/scylla_create_devices``.
-#. Start Scylla Server
+#. Start ScyllaDB Server

.. include:: /rst_include/scylla-commands-start-index.rst
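
Condensed, and with the Host ID left as a placeholder, the procedure above looks roughly like this:

.. code-block:: shell

   # In /etc/scylla/scylla.yaml on the node being rebuilt:
   #   replace_node_first_boot: <host-id-of-this-node-before-the-restart>
   sudo systemctl stop scylla-server
   sudo /opt/scylladb/scylla-machine-image/scylla_create_devices   # only if the node has multiple disks
   sudo systemctl start scylla-server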

diff --git a/docs/operating-scylla/procedures/cluster-management/remove-seed.rst b/docs/operating-scylla/procedures/cluster-management/remove-seed.rst
--- a/docs/operating-scylla/procedures/cluster-management/remove-seed.rst
+++ b/docs/operating-scylla/procedures/cluster-management/remove-seed.rst
@@ -5,7 +5,7 @@ Remove a Seed Node from Seed List
This procedure describes how to remove a seed node from the seed list.

.. note::
- The seed concept in gossip has been removed. Starting with Scylla Open Source 4.3 and Scylla Enterprise 2021.1, a seed node
+ The seed concept in gossip has been removed. Starting with ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1, a seed node
is only used by a new node during startup to learn about the cluster topology. As a result, you only need to configure one
seed node in a node's ``scylla.yaml`` file.

@@ -19,7 +19,7 @@ Verify that the seed node you want to remove is listed as a seed node in the ``s
Procedure
---------

-1. Update the Scylla configuration file, scylla.yaml, which can be found under ``/etc/scylla/``. For example:
+1. Update the ScyllaDB configuration file, scylla.yaml, which can be found under ``/etc/scylla/``. For example:

Seed list before removing the node:

@@ -33,6 +33,6 @@ Seed list after removing the node:

- seeds: "10.240.0.83,10.240.0.93"

-2. Scylla will read the updated seed list the next time it starts. You can force Scylla to read the list immediately by restarting Scylla as follows:
+2. ScyllaDB will read the updated seed list the next time it starts. You can force ScyllaDB to read the list immediately by restarting ScyllaDB as follows:

.. include:: /rst_include/scylla-commands-restart-index.rst
diff --git a/docs/operating-scylla/procedures/cluster-management/replace-dead-node-or-more.rst b/docs/operating-scylla/procedures/cluster-management/replace-dead-node-or-more.rst
--- a/docs/operating-scylla/procedures/cluster-management/replace-dead-node-or-more.rst
+++ b/docs/operating-scylla/procedures/cluster-management/replace-dead-node-or-more.rst
@@ -30,7 +30,7 @@ Login to one of the nodes in the cluster with (UN) status, collect the following
* cluster_name - ``cat /etc/scylla/scylla.yaml | grep cluster_name``
* seeds - ``cat /etc/scylla/scylla.yaml | grep seeds:``
* endpoint_snitch - ``cat /etc/scylla/scylla.yaml | grep endpoint_snitch``
-* Scylla version - ``scylla --version``
+* ScyllaDB version - ``scylla --version``

Procedure
---------
diff --git a/docs/operating-scylla/procedures/cluster-management/replace-dead-node.rst b/docs/operating-scylla/procedures/cluster-management/replace-dead-node.rst
--- a/docs/operating-scylla/procedures/cluster-management/replace-dead-node.rst
+++ b/docs/operating-scylla/procedures/cluster-management/replace-dead-node.rst
@@ -48,21 +48,21 @@ Login to one of the nodes in the cluster with the UN status. Collect the followi
* cluster_name - ``cat /etc/scylla/scylla.yaml | grep cluster_name``
* seeds - ``cat /etc/scylla/scylla.yaml | grep seeds:``
* endpoint_snitch - ``cat /etc/scylla/scylla.yaml | grep endpoint_snitch``
- * Scylla version - ``scylla --version``
+ * ScyllaDB version - ``scylla --version``

---------
Procedure
---------

-#. Install Scylla on a new node, see :doc:`Getting Started</getting-started/index>` for further instructions. Follow the Scylla install procedure up to ``scylla.yaml`` configuration phase. Ensure that the Scylla version of the new node is identical to the other nodes in the cluster.
+#. Install ScyllaDB on a new node; see :doc:`Getting Started</getting-started/index>` for further instructions. Follow the ScyllaDB install procedure up to the ``scylla.yaml`` configuration phase. Ensure that the ScyllaDB version of the new node is identical to the version on the other nodes in the cluster.

.. include:: /operating-scylla/procedures/cluster-management/_common/match_version.rst

#. In the ``scylla.yaml`` file edit the parameters listed below. The file can be found under ``/etc/scylla/``.

- **cluster_name** - Set the selected cluster_name

- - **listen_address** - IP address that Scylla uses to connect to other Scylla nodes in the cluster
+ - **listen_address** - IP address that ScyllaDB uses to connect to other ScyllaDB nodes in the cluster

- **seeds** - Set the seed nodes

@@ -149,7 +149,7 @@ Procedure
UN 192.168.1.202 91.11 KB 256 32.9% 125ed9f4-7777-1dbn-mac8-43fddce9123e B1
UN 192.168.1.204 124.42 KB 256 32.6% 655ae64d-e3fb-45cc-9792-2b648b151b67 B1

-#. Run the ``nodetool repair`` command on the node that was replaced to make sure that the data is synced with the other nodes in the cluster. You can use `Scylla Manager <https://manager.docs.scylladb.com/>`_ to run the repair.
+#. Run the ``nodetool repair`` command on the node that was replaced to make sure that the data is synced with the other nodes in the cluster. You can use `ScyllaDB Manager <https://manager.docs.scylladb.com/>`_ to run the repair.

.. note::
When :doc:`Repair Based Node Operations (RBNO) <repair-based-node-operation>` for **replace** is enabled, there is no need to rerun repair.
@@ -166,7 +166,7 @@ In case you need to to restart (stop + start, not reboot) an instance with ephem

In this case, the node's data will be cleaned after restart. To remedy this, you need to recreate the RAID.

-#. Stop the Scylla server on the node you restarted. The rest of the commands will run on this node as well.
+#. Stop the ScyllaDB server on the node you restarted. The rest of the commands will run on this node as well.

.. include:: /rst_include/scylla-commands-stop-index.rst

@@ -188,7 +188,7 @@ In this case, the node's data will be cleaned after restart. To remedy this, you

sudo /opt/scylladb/scylla-machine-image/scylla_create_devices

-#. Start Scylla Server
+#. Start ScyllaDB Server

.. include:: /rst_include/scylla-commands-start-index.rst

diff --git a/docs/operating-scylla/procedures/cluster-management/replace-running-node.rst b/docs/operating-scylla/procedures/cluster-management/replace-running-node.rst
--- a/docs/operating-scylla/procedures/cluster-management/replace-running-node.rst
+++ b/docs/operating-scylla/procedures/cluster-management/replace-running-node.rst
@@ -2,7 +2,7 @@
Replace a Running Node in a ScyllaDB Cluster
*********************************************

-There are two methods to replace a running node in a Scylla cluster.
+There are two methods to replace a running node in a ScyllaDB cluster.

#. `Add a new node to the cluster and then decommission the old node`_
#. `Replace a running node - by taking its place in the cluster`_
@@ -16,17 +16,17 @@ Add a new node to the cluster and then decommission the old node
=================================================================

Adding a new node and only then decommissioning the old node allows the cluster to keep the same level of data replication throughout the process, but at the cost of more data being transferred during the procedure.
-When adding a new node to a Scylla cluster, existing nodes will give the new node responsibility for a subset of their vNodes, making sure that data is once again equally distributed. In the process, these nodes will stream relevant data to the new node.
-When decommissioning a node from a Scylla cluster, it will give its vNodes to other nodes, making sure data is once again equally distributed. In the process, this node will stream its data to the other nodes.
+When adding a new node to a ScyllaDB cluster, existing nodes will give the new node responsibility for a subset of their vNodes, making sure that data is once again equally distributed. In the process, these nodes will stream relevant data to the new node.
+When decommissioning a node from a ScyllaDB cluster, it will give its vNodes to other nodes, making sure data is once again equally distributed. In the process, this node will stream its data to the other nodes.
Hence, replacing a node by adding and decommissioning redistributes the vNodes twice, streaming a node's worth of data each time.


Procedure
^^^^^^^^^

-1. Follow the procedure: :doc:`Adding a New Node Into an Existing Scylla Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>`.
+1. Follow the procedure: :doc:`Adding a New Node Into an Existing ScyllaDB Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>`.

-2. Decommission the old node using the :doc:`Remove a Node from a Scylla Cluster </operating-scylla/procedures/cluster-management/remove-node>` procedure
+2. Decommission the old node using the :doc:`Remove a Node from a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/remove-node>` procedure

3. Run the :doc:`nodetool cleanup </operating-scylla/nodetool-commands/cleanup/>` command on all the remaining nodes in the cluster

@@ -40,13 +40,13 @@ Stopping a node and taking its place in the cluster is not as safe as the data r

Procedure
^^^^^^^^^
-1. Run :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (Scylla stops listening to its connections from the client and other nodes).
+1. Run :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (ScyllaDB stops listening to its connections from the client and other nodes).

-2. Stop the Scylla node you want to replace
+2. Stop the ScyllaDB node you want to replace

.. include:: /rst_include/scylla-commands-stop-index.rst

-3. Follow the :doc:`Replace a Dead Node in a Scylla Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>` procedure
+3. Follow the :doc:`Replace a Dead Node in a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>` procedure

4. Verify that the node is successfully replaced using :doc:`nodetool status </operating-scylla/nodetool-commands/status>` command

diff --git a/docs/operating-scylla/procedures/cluster-management/replace-seed-node.rst b/docs/operating-scylla/procedures/cluster-management/replace-seed-node.rst
--- a/docs/operating-scylla/procedures/cluster-management/replace-seed-node.rst
+++ b/docs/operating-scylla/procedures/cluster-management/replace-seed-node.rst
@@ -4,11 +4,11 @@ Replacing a Dead Seed Node
===========================

.. note::
- The seed concept in gossip has been removed. Starting with Scylla Open Source 4.3 and Scylla Enterprise 2021.1,
+ The seed concept in gossip has been removed. Starting with ScyllaDB Open Source 4.3 and ScyllaDB Enterprise 2021.1,
a seed node is only used by a new node during startup to learn about the cluster topology. As a result, there's no need
to replace the node configured with the ``seeds`` parameter in the ``scylla.yaml`` file.

-In Scylla, it is not possible to bootstrap a seed node. The following steps describe how to replace a dead seed node.
+In ScyllaDB, it is not possible to bootstrap a seed node. The following steps describe how to replace a dead seed node.

Prerequisites
-------------
diff --git a/docs/operating-scylla/procedures/cluster-management/revoke-decommission.rst b/docs/operating-scylla/procedures/cluster-management/revoke-decommission.rst
--- a/docs/operating-scylla/procedures/cluster-management/revoke-decommission.rst
+++ b/docs/operating-scylla/procedures/cluster-management/revoke-decommission.rst
@@ -2,7 +2,7 @@
Add a Decommissioned Node Back to a ScyllaDB Cluster
*****************************************************

-This procedure describes how to add a node to a Scylla cluster after it was decommissioned.
+This procedure describes how to add a node to a ScyllaDB cluster after it was decommissioned.
In some cases, one would like to add a decommissioned node back to the cluster, for example, if the node was decommissioned by mistake. The following procedure describes how to do that by clearing all data from the node and adding it back to the cluster as a new node.


@@ -40,4 +40,4 @@ Procedure

Since the node is added back to the cluster as a new node, you must delete the old node's data folder. Otherwise, the old node's state (like bootstrap status) will prevent the new node from starting its init procedure.

-| 3. Follow the :doc:`Adding a New Node Into an Existing Scylla Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>` procedure to add the decommissioned node back into the cluster
+| 3. Follow the :doc:`Adding a New Node Into an Existing ScyllaDB Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>` procedure to add the decommissioned node back into the cluster
diff --git a/docs/operating-scylla/procedures/cluster-management/safely-removing-joining-node.rst b/docs/operating-scylla/procedures/cluster-management/safely-removing-joining-node.rst
--- a/docs/operating-scylla/procedures/cluster-management/safely-removing-joining-node.rst
+++ b/docs/operating-scylla/procedures/cluster-management/safely-removing-joining-node.rst
@@ -4,7 +4,7 @@ Safely Remove a Joining Node

Sometimes when adding a node to the cluster, it gets stuck in a JOINING state (UJ) and never completes the process to an Up-Normal (UN) state. The only solution is to remove the node. As long as the node did not join the cluster, meaning it never went into UN state, you can stop this node, clean its data, and try again.

-1. Run the :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (Scylla stops listening to its connections from the client and other nodes).
+1. Run the :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (ScyllaDB stops listening to its connections from the client and other nodes).

2. Stop the node

diff --git a/docs/operating-scylla/procedures/cluster-management/scale-up-cluster.rst b/docs/operating-scylla/procedures/cluster-management/scale-up-cluster.rst
--- a/docs/operating-scylla/procedures/cluster-management/scale-up-cluster.rst
+++ b/docs/operating-scylla/procedures/cluster-management/scale-up-cluster.rst
@@ -4,7 +4,7 @@ Upscale a Cluster

Upscaling your cluster involves moving the cluster to a larger instance. With this procedure, it can be done without downtime.

-Scylla was designed with big servers and multi-cores in mind. In most cases, it is better to run a smaller cluster on a bigger machine instance than a larger cluster on a small machine instance.
+ScyllaDB was designed with big servers and multi-cores in mind. In most cases, it is better to run a smaller cluster on a bigger machine instance than a larger cluster on a small machine instance.
However, there may be cases where you started with a small cluster, and now you want to upscale.

There are a few alternatives to do this:
@@ -44,10 +44,10 @@ to avoid interrupting the availability of your application:
.. include:: /rst_include/scylla-commands-stop-index.rst

#. Add cores
-#. Run ``scylla_setup`` to set Scylla to the new HW configuration.
+#. Run ``scylla_setup`` to set ScyllaDB to the new HW configuration.
#. Start the service

.. include:: /rst_include/scylla-commands-start-index.rst


-.. note:: Updating the number of cores will cause Scylla to reshard the SSTables to match the new core number. This is done by compacting all of the data on disk at startup.
+.. note:: Updating the number of cores will cause ScyllaDB to reshard the SSTables to match the new core number. This is done by compacting all of the data on disk at startup.
diff --git a/docs/operating-scylla/procedures/config-change/index.rst b/docs/operating-scylla/procedures/config-change/index.rst
--- a/docs/operating-scylla/procedures/config-change/index.rst
+++ b/docs/operating-scylla/procedures/config-change/index.rst
@@ -1,5 +1,5 @@
-Scylla Configuration Procedures
-===============================
+ScyllaDB Configuration Procedures
+=================================


.. toctree::
@@ -15,11 +15,11 @@ Scylla Configuration Procedures
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla Configuration Procedures</h5>
+ <h5 id="getting-started">ScyllaDB Configuration Procedures</h5>
</div>
<div class="medium-9 columns">

-Procedures to change Scylla Configuration settings.
+Procedures to change ScyllaDB Configuration settings.

* :doc:`How to Switch Snitches </operating-scylla/procedures/config-change/switch-snitch/>`

diff --git a/docs/operating-scylla/procedures/config-change/rolling-restart.rst b/docs/operating-scylla/procedures/config-change/rolling-restart.rst
--- a/docs/operating-scylla/procedures/config-change/rolling-restart.rst
+++ b/docs/operating-scylla/procedures/config-change/rolling-restart.rst
@@ -11,18 +11,18 @@ This is a general procedure that describes how to perform a rolling restart. You
Procedure
---------

-1. Run :doc:`nodetool drain </operating-scylla/nodetool-commands/drain/>` command (Scylla stops listening to its connections from the client and other nodes).
+1. Run :doc:`nodetool drain </operating-scylla/nodetool-commands/drain/>` command (ScyllaDB stops listening to its connections from the client and other nodes).

-2. Stop the Scylla node.
+2. Stop the ScyllaDB node.

.. include:: /rst_include/scylla-commands-stop-index.rst

3. Update the relevant configuration file, for example, scylla.yaml. The file can be found under ``/etc/scylla/``.

-4. Start the Scylla node.
+4. Start the ScyllaDB node.

.. include:: /rst_include/scylla-commands-start-index.rst

-5. Verify the node is up and has returned to the Scylla cluster using :doc:`nodetool status </operating-scylla/nodetool-commands/status/>`.
+5. Verify the node is up and has returned to the ScyllaDB cluster using :doc:`nodetool status </operating-scylla/nodetool-commands/status/>`.

6. Repeat this procedure for all the relevant nodes in the cluster.
diff --git a/docs/operating-scylla/procedures/config-change/switch-snitch.rst b/docs/operating-scylla/procedures/config-change/switch-snitch.rst
--- a/docs/operating-scylla/procedures/config-change/switch-snitch.rst
+++ b/docs/operating-scylla/procedures/config-change/switch-snitch.rst
@@ -13,7 +13,7 @@ How to Switch Snitches

This procedure describes the steps that need to be done when switching from one type of snitch to another.
Such a scenario can occur when expanding the cluster and adding more data centers in different locations.
-Snitches are responsible for specifying how Scylla distributes the replicas. The procedure is dependent on any changes in the cluster topology.
+Snitches are responsible for specifying how ScyllaDB distributes the replicas. The procedure depends on whether the cluster topology changes.

**Note** - Switching a snitch requires a full cluster shutdown, so it is highly recommended to choose the :doc:`right snitch </operating-scylla/system-configuration/snitch>` for your needs at the cluster setup phase.
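
On each node, the snitch is selected with the ``endpoint_snitch`` setting in ``scylla.yaml``. As a minimal sketch (the snitch shown is only an example; pick the one that matches your topology):

.. code-block:: bash

   # Check which snitch a node currently uses, then edit it as required,
   # e.g. GossipingPropertyFileSnitch for a multi-datacenter deployment.
   grep endpoint_snitch /etc/scylla/scylla.yaml
   sudo vi /etc/scylla/scylla.yaml   # set: endpoint_snitch: GossipingPropertyFileSnitch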

diff --git a/docs/operating-scylla/procedures/index.rst b/docs/operating-scylla/procedures/index.rst
--- a/docs/operating-scylla/procedures/index.rst
+++ b/docs/operating-scylla/procedures/index.rst
@@ -9,8 +9,8 @@ Procedures
Change Configuration <config-change/index>
Maintenance <maintenance/index>
Best Practices <tips/index>
- Benchmarking Scylla </operating-scylla/benchmarking-scylla>
- Migrate from Cassandra to Scylla <cassandra-to-scylla-migration-process>
+ Benchmarking ScyllaDB </operating-scylla/benchmarking-scylla>
+ Migrate from Cassandra to ScyllaDB <cassandra-to-scylla-migration-process>
Disable Housekeeping </getting-started/installation-common/disable-housekeeping>


@@ -24,18 +24,18 @@ Procedures
</div>
<div class="medium-9 columns">

-Procedures to create, out-scale, down-scale, and backup Scylla clusters
+Procedures to create, scale out, scale down, and back up ScyllaDB clusters

* :doc:`Cluster management procedures </operating-scylla/procedures/cluster-management/index>`
* :doc:`Backup & Restore procedures </operating-scylla/procedures/backup-restore/index>`
* :doc:`Procedures to change configuration </operating-scylla/procedures/config-change/index>`
* :doc:`Maintenance Procedures </operating-scylla/procedures/maintenance/index>`
* :doc:`Best Practices </operating-scylla/procedures/tips/index>`
-* :doc:`Benchmarking Scylla </operating-scylla/benchmarking-scylla>`
-* :doc:`Migrate from Cassandra to Scylla </operating-scylla/procedures/cassandra-to-scylla-migration-process>`
-* :doc:`Disable Scylla Housekeeping </getting-started/installation-common/disable-housekeeping>`
+* :doc:`Benchmarking ScyllaDB </operating-scylla/benchmarking-scylla>`
+* :doc:`Migrate from Cassandra to ScyllaDB </operating-scylla/procedures/cassandra-to-scylla-migration-process>`
+* :doc:`Disable ScyllaDB Housekeeping </getting-started/installation-common/disable-housekeeping>`
* :doc:`How to Change Log Level in Runtime </troubleshooting/log-level/>`
-* For training material and hands-on examples also check out the `Cluster Management Repair and Scylla Manager lesson <https://university.scylladb.com/courses/scylla-operations/lessons/cluster-management-repair-and-scylla-manager/>`_ on Scylla University.
+* For training material and hands-on examples also check out the `Cluster Management Repair and ScyllaDB Manager lesson <https://university.scylladb.com/courses/scylla-operations/lessons/cluster-management-repair-and-scylla-manager/>`_ on ScyllaDB University.

.. raw:: html

diff --git a/docs/operating-scylla/procedures/maintenance/index.rst b/docs/operating-scylla/procedures/maintenance/index.rst
--- a/docs/operating-scylla/procedures/maintenance/index.rst
+++ b/docs/operating-scylla/procedures/maintenance/index.rst
@@ -1,5 +1,5 @@
-Scylla Maintenance Procedures
-=============================
+ScyllaDB Maintenance Procedures
+===============================

.. toctree::
:hidden:
@@ -12,11 +12,11 @@ Scylla Maintenance Procedures
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla Maintenance Procedures</h5>
+ <h5 id="getting-started">ScyllaDB Maintenance Procedures</h5>
</div>
<div class="medium-9 columns">

-Scylla Maintenance Procedures
+ScyllaDB Maintenance Procedures

* :doc:`Repair </operating-scylla/procedures/maintenance/repair>`

diff --git a/docs/operating-scylla/procedures/maintenance/repair.rst b/docs/operating-scylla/procedures/maintenance/repair.rst
--- a/docs/operating-scylla/procedures/maintenance/repair.rst
+++ b/docs/operating-scylla/procedures/maintenance/repair.rst
@@ -1,8 +1,8 @@
-==============
-Scylla Repair
-==============
+===============
+ScyllaDB Repair
+===============

-During the regular operation, a Scylla cluster continues to function and remains ‘always-on’ even in the face of failures such as:
+During regular operation, a ScyllaDB cluster continues to function and remains ‘always-on’ even in the face of failures such as:

* A down node
* A network partition
@@ -11,7 +11,7 @@ During the regular operation, a Scylla cluster continues to function and remains
* Process crashes (before a flush)
* A replica that cannot write due to a lack of resources

-As long as the cluster can satisfy the required consistency level (usually quorum), availability and consistency will be maintained. However, in order to automatically mitigate data inconsistency (entropy), Scylla uses three processes:
+As long as the cluster can satisfy the required consistency level (usually quorum), availability and consistency will be maintained. However, in order to automatically mitigate data inconsistency (entropy), ScyllaDB uses three processes:

* :doc:`Hinted Handoff </architecture/anti-entropy/hinted-handoff>`
* :doc:`Read Repair </architecture/anti-entropy/read-repair>`
@@ -23,9 +23,9 @@ Repair Overview

Data stored on nodes may become inconsistent with other replicas over time. For this reason, repairs are a necessary part of database maintenance.

-Scylla repair is a process that runs in the background and synchronizes the data between nodes so that all the replicas hold the same data.
+ScyllaDB repair is a process that runs in the background and synchronizes the data between nodes so that all the replicas hold the same data.
Running repairs is necessary to ensure that data on a given node is consistent with the other nodes in the cluster.
-You can manually run the ``nodetool repair`` command or schedule repair with `Scylla Manager <https://manager.docs.scylladb.com/stable/repair>`_,
+You can manually run the ``nodetool repair`` command or schedule repair with `ScyllaDB Manager <https://manager.docs.scylladb.com/stable/repair>`_,
which can run repairs for you.

.. note:: Run the :doc:`nodetool repair </operating-scylla/nodetool-commands/repair/>` command regularly. If you delete data frequently, run it more often than the value of ``gc_grace_seconds`` (by default: 10 days), for example, every week. Run **nodetool repair -pr** on each node in the cluster, sequentially.
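
For illustration only (ScyllaDB Manager remains the recommended way to schedule repairs), a sequential primary-range repair over a three-node cluster could look like the following sketch; the host names are hypothetical:

.. code-block:: bash

   # Repair the primary ranges of each node, one node at a time.
   for host in node1 node2 node3; do
       ssh "$host" nodetool repair -pr
   done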
@@ -39,9 +39,9 @@ Row-level Repair

ScyllaDB uses row-level repair.

-Row-level repair improves Scylla in two ways:
+Row-level repair improves ScyllaDB in two ways:

-* Minimizes data transfer. With row-level repair, Scylla calculates the checksum for each row and uses set reconciliation algorithms to find the mismatches between nodes. As a result, only the mismatched rows are exchanged, which eliminates unnecessary data transmission over the network.
+* Minimizes data transfer. With row-level repair, ScyllaDB calculates the checksum for each row and uses set reconciliation algorithms to find the mismatches between nodes. As a result, only the mismatched rows are exchanged, which eliminates unnecessary data transmission over the network.

* Minimizes disk reads by:

@@ -51,8 +51,8 @@ Row-level repair improves Scylla in two ways:

See also

-* `Scylla Manager documentation <https://manager.docs.scylladb.com/>`_
+* `ScyllaDB Manager documentation <https://manager.docs.scylladb.com/>`_

-* `Blog: Scylla Open Source 3.1: Efficiently Maintaining Consistency with Row-Level Repair <https://www.scylladb.com/2019/08/13/scylla-open-source-3-1-efficiently-maintaining-consistency-with-row-level-repair/>`_
+* `Blog: ScyllaDB Open Source 3.1: Efficiently Maintaining Consistency with Row-Level Repair <https://www.scylladb.com/2019/08/13/scylla-open-source-3-1-efficiently-maintaining-consistency-with-row-level-repair/>`_


diff --git a/docs/operating-scylla/procedures/tips/avoid-node-mismanagement.rst b/docs/operating-scylla/procedures/tips/avoid-node-mismanagement.rst
--- a/docs/operating-scylla/procedures/tips/avoid-node-mismanagement.rst
+++ b/docs/operating-scylla/procedures/tips/avoid-node-mismanagement.rst
@@ -29,7 +29,7 @@ Then, node4 will be added to the cluster **without streaming any data**.

**Lesson Learned** - To ensure that new nodes bootstrap correctly, always configure the seed node (contact point) as an existing node in the cluster.

-**Procedure to Use** - :doc:`Add a New Node Into an Existing Scylla Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>`
+**Procedure to Use** - :doc:`Add a New Node Into an Existing ScyllaDB Cluster </operating-scylla/procedures/cluster-management/add-node-to-cluster/>`

Node Removal Error
------------------
@@ -48,7 +48,7 @@ Node3 is a dead node. If you do the following:

**Lesson Learned** - Never reinstate a node that was removed.

-**Procedure to Use** - :doc:`Remove a Node from a Scylla Cluster (Down Scale) </operating-scylla/procedures/cluster-management/remove-node/>`
+**Procedure to Use** - :doc:`Remove a Node from a ScyllaDB Cluster (Down Scale) </operating-scylla/procedures/cluster-management/remove-node/>`

Decommission Error
------------------
@@ -68,7 +68,7 @@ Node2 is down. You login node3. You run ``nodetool decommission`` to remove n3 f
**Lesson Learned** - It is best to fix dead nodes before a ``nodetool decommission`` operation so that every node knows which nodes are decommissioned.
If there is no way to fix the dead node and decommission is performed without it, do not bring that dead node back.

-**Procedure to Use** - :doc:`Replace a Dead Node in a Scylla Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>`
+**Procedure to Use** - :doc:`Replace a Dead Node in a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>`

Node Replacement Error
----------------------
@@ -83,4 +83,4 @@ Node3 is dead. If you add node4 to replace node3 with the same IP address as nod

**Lesson Learned** - Never reinstate a node that was removed.

-**Procedure to Use** - :doc:`Replace a Dead Node in a Scylla Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>`
+**Procedure to Use** - :doc:`Replace a Dead Node in a ScyllaDB Cluster </operating-scylla/procedures/cluster-management/replace-dead-node/>`
diff --git a/docs/operating-scylla/procedures/tips/benchmark-tips.rst b/docs/operating-scylla/procedures/tips/benchmark-tips.rst
--- a/docs/operating-scylla/procedures/tips/benchmark-tips.rst
+++ b/docs/operating-scylla/procedures/tips/benchmark-tips.rst
@@ -1,9 +1,9 @@
-=============================
-Maximizing Scylla Performance
-=============================
+===============================
+Maximizing ScyllaDB Performance
+===============================

-The purpose of this guide is to provide an overview of the best practices for maximizing the performance of Scylla, the next-generation NoSQL database.
-Even though Scylla auto-tunes for optimal performance, users still need to apply best practices in order to get the most out of their Scylla deployments.
+The purpose of this guide is to provide an overview of the best practices for maximizing the performance of ScyllaDB, the next-generation NoSQL database.
+Even though ScyllaDB auto-tunes for optimal performance, users still need to apply best practices in order to get the most out of their ScyllaDB deployments.



@@ -12,45 +12,45 @@ Performance Tips Summary
If you are not planning to read this document fully, then here are the most important parts of this guide:

* use the best hardware you can reasonably afford
-* install Scylla Monitoring Stack
+* install ScyllaDB Monitoring Stack
* run scylla_setup script
* use the cassandra-stress tool
* expect to get at least 12.5K operations per second (OPS) per physical core for simple operations on selected hardware

-Scylla Design Advantages
-------------------------
+ScyllaDB Design Advantages
+--------------------------

-Scylla is different from any other NoSQL database. It achieves the highest levels of performance and takes full control of the hardware by utilizing all of the server cores in order to provide strict SLAs for low-latency operations.
-If you run Scylla in an over-committed environment, performance won't just be linearly slower &emdash; it will tank completely.
+ScyllaDB is different from any other NoSQL database. It achieves the highest levels of performance and takes full control of the hardware by utilizing all of the server cores in order to provide strict SLAs for low-latency operations.
+If you run ScyllaDB in an over-committed environment, performance won't just be linearly slower; it will tank completely.

-This is because Scylla has a reactor design that runs on all the (configured) cores and a scheduler that assumes a 0.5 ms tick.
-Scylla does everything it can to control queues in userspace and not in the OS/drives.
+This is because ScyllaDB has a reactor design that runs on all the (configured) cores and a scheduler that assumes a 0.5 ms tick.
+ScyllaDB does everything it can to control queues in userspace and not in the OS/drives.
Thus, it assumes the bandwidth that was measured by ``scylla_setup``.

-It is not that difficult to get the best performance out of Scylla. Mostly, it is automatically tuned as long as you do not work against the system.
-The remainder of this document contains the best practices to follow to make sure that Scylla keeps tuning itself and that your performance has maximum results.
+It is not that difficult to get the best performance out of ScyllaDB. Mostly, it is automatically tuned as long as you do not work against the system.
+The remainder of this document contains the best practices to follow to make sure that ScyllaDB keeps tuning itself and that you get the best possible performance.

-Install Scylla Monitoring Stack
--------------------------------
+Install ScyllaDB Monitoring Stack
+---------------------------------

-Install and use the `Scylla Monitoring Stack <http://monitoring.docs.scylladb.com/>`_; it gives excellent additional value beyond performance.
-If you don’t know what your bottleneck is, you have not configured your system correctly. The Scylla monitoring stack dashboards will help you sort this out.
+Install and use the `ScyllaDB Monitoring Stack <http://monitoring.docs.scylladb.com/>`_; it gives excellent additional value beyond performance.
+If you don’t know what your bottleneck is, you have not configured your system correctly. The ScyllaDB Monitoring Stack dashboards will help you sort this out.

-With the recent addition of the `Scylla Advisor <http://monitoring.docs.scylladb.com/stable/advisor.html>`_ to the Scylla Monitoring Stack, it is even easier to find potential issues.
+With the recent addition of the `ScyllaDB Advisor <http://monitoring.docs.scylladb.com/stable/advisor.html>`_ to the ScyllaDB Monitoring Stack, it is even easier to find potential issues.

-Install Scylla Manager
-----------------------
+Install ScyllaDB Manager
+------------------------

-Install and use `Scylla Manager <https://manager.docs.scylladb.com>` together with the `Scylla Monitoring Stack <http://monitoring.docs.scylladb.com/>`_.
-Scylla Manager provides automated backups and repairs of your database.
-Scylla Manager can manage multiple Scylla clusters and run cluster-wide tasks in a controlled and predictable way.
-For example, with Scylla Manager you can control the intensity of a repair, increasing it to speed up the process, or lower the intensity to ensure it minimizes impact on ongoing operations.
+Install and use `ScyllaDB Manager <https://manager.docs.scylladb.com>`_ together with the `ScyllaDB Monitoring Stack <http://monitoring.docs.scylladb.com/>`_.
+ScyllaDB Manager provides automated backups and repairs of your database.
+ScyllaDB Manager can manage multiple ScyllaDB clusters and run cluster-wide tasks in a controlled and predictable way.
+For example, with ScyllaDB Manager you can control the intensity of a repair, increasing it to speed up the process or lowering it to minimize the impact on ongoing operations.

Run scylla_setup
----------------

-Before running Scylla, it is critical that the scylla_setup script has been executed.
-Scylla doesn't require manual optimization &emdash; it is the task of the scylla_setup script to determine the optimal configuration.
+Before running ScyllaDB, it is critical that the scylla_setup script has been executed.
+ScyllaDB doesn't require manual optimization; it is the task of the scylla_setup script to determine the optimal configuration.
But, if ``scylla_setup`` has not run, the system won’t be configured optimally. Refer to the :doc:`System Configuration </getting-started/system-configuration/>` guide for details.

Benchmarking Best Practices
@@ -129,7 +129,7 @@ Instead of rolling out custom benchmarks, use proven tools like cassandra-stress
It is very flexible and takes care of coordinated omission.
Yahoo! Cloud Serving Benchmark (YCSB) is also an option, but needs to be configured correctly to prevent coordinated omission.
TLP-stress is not recommended because it suffers from coordinated omission.
-When benchmarking make sure that cassandra-stress that is part of the Scylla distribution is used because it contains the shard aware drivers.
+When benchmarking, make sure to use the cassandra-stress that is shipped with the ScyllaDB distribution, because it contains the shard-aware drivers.
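
As a rough sketch (the node address, operation count, and thread count are placeholders), a simple write benchmark with the bundled cassandra-stress could look like this:

.. code-block:: bash

   # Write benchmark using the cassandra-stress shipped with ScyllaDB,
   # which includes the shard-aware Java driver. Values are illustrative.
   cassandra-stress write n=1000000 cl=QUORUM \
       -rate threads=200 \
       -node 10.0.0.1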

Use the Same Benchmark Tool
===========================
@@ -165,13 +165,13 @@ Use Prepared Statements

Prepared statements provide better performance because parsing is done only once, requests can use token/shard-aware routing, and less data is sent.
Apart from performance improvements, prepared statements also increase security because they prevent CQL injection.
-Read more about `Stop Wasting Scylla’s CPU Time by Not Being Prepared <https://www.scylladb.com/2017/12/13/prepared-statements-scylla/>`_.
+Read more about `Stop Wasting ScyllaDB’s CPU Time by Not Being Prepared <https://www.scylladb.com/2017/12/13/prepared-statements-scylla/>`_.

Use Paged Queries
=================

It is best to run queries that return a small number of rows.
-However, if a query can return many rows, then the unpaged query can lead to a huge memory bubble. This will eventually cause Scylla to kill the query.
+However, if a query can return many rows, then the unpaged query can lead to a huge memory bubble. This will eventually cause ScyllaDB to kill the query.
With a paged query, the execution collects a page's worth of data and new pages are retrieved on demand, leading to smaller memory bubbles.
Read about `More Efficient Query Paging <https://www.scylladb.com/2018/07/13/efficient-query-paging/>`_.

@@ -181,7 +181,7 @@ Use Workload Prioritization
In a typical application there are operational workloads that require low latency.
Sometimes these run in parallel with analytic workloads that process high volumes of data and do not require low latency.
With workload prioritization, you can prevent analytic workloads from causing unwanted high latency for operational workloads.
-`Workload prioritization <https://enterprise.docs.scylladb.com/stable/using-scylla/workload-prioritization.html>`_ is only available with `Scylla Enterprise <https://enterprise.docs.scylladb.com/>`_.
+`Workload prioritization <https://enterprise.docs.scylladb.com/stable/using-scylla/workload-prioritization.html>`_ is only available with `ScyllaDB Enterprise <https://enterprise.docs.scylladb.com/>`_.

Bypass Cache
============
@@ -206,89 +206,89 @@ This is 19% of the latency compared to no batching.
Driver Guidelines
-----------------

-Use the :doc:`Scylla drivers </using-scylla/drivers/index>` that are available for Java, Python, Go, and C/C++.
+Use the :doc:`ScyllaDB drivers </using-scylla/drivers/index>` that are available for Java, Python, Go, and C/C++.
They provide much better performance than third-party drivers because they are shard aware: they can route requests to the right CPU core (shard).
When the driver starts, it gets the topology of the cluster and therefore it knows exactly which CPU core should get a request.
Our latest shard-aware drivers also improve the efficiency of our Change Data Capture (CDC) feature.
-If the Scylla drivers are not an option, make sure that at least a token aware driver is used so that one round trip is removed.
+If the ScyllaDB drivers are not an option, make sure that at least a token aware driver is used so that one round trip is removed.

-Check if there are sufficient connections created by the client, otherwise performance could suffer. The general rule is between 1-3 connections per Scylla CPU per node.
+Check that the client creates sufficient connections; otherwise, performance could suffer. The general rule is 1 to 3 connections per ScyllaDB CPU per node.

Hardware Guidelines
-------------------

CPU Core Count guidelines
=========================

-Scylla, by default, will make use of all of its CPUs cores and is designed to perform well on powerful machines and as a consequence fewer machines are needed.
+By default, ScyllaDB makes use of all of its CPU cores and is designed to perform well on powerful machines; as a consequence, fewer machines are needed.
The recommended minimum number of CPU cores per node for operational workloads is 20.

The rule of thumb is that a single physical CPU can process 12.5K queries per second with a payload of up to 1 KB.
If a single node should process 400K queries per second, at least 32 physical CPUs or 64 hyper-threaded cores are required.
In cloud environments, hyper-threaded cores are often called virtual CPUs (vCPUs) or just cores.
So it is important to determine whether a virtual CPU corresponds to a physical CPU or to a hyper-threaded core.

-Scylla relies on temporarily spinning the CPU instead of blocking and waiting for data to arrive. This is done to reduce latency due to reduced context switching.
-The drawback is that the CPUs are 100% utilized and you might falsely conclude that Scylla can’t keep up with the load.
-Read more about :doc:`Scylla System Requirements </getting-started/system-requirements>`.
+ScyllaDB relies on temporarily spinning the CPU instead of blocking and waiting for data to arrive. This reduces latency by avoiding context switching.
+The drawback is that the CPUs are 100% utilized and you might falsely conclude that ScyllaDB can’t keep up with the load.
+Read more about :doc:`ScyllaDB System Requirements </getting-started/system-requirements>`.

Memory Guidelines
=================
-During startup, Scylla claims nearly all of the available memory for itself.
+During startup, ScyllaDB claims nearly all of the available memory for itself.
This is done for caching purposes to reduce the number of I/O operations.
So the more memory available, the better the performance.

-Scylla recommends at least 2 GB of memory per core and a minimum of 16 GB of memory for a system (pick the highest value).
+ScyllaDB recommends at least 2 GB of memory per core and a minimum of 16 GB of memory for a system (whichever is higher).
This means if you have a 64 core system, you should have at least 2x64=128 GB of memory.

The max recommended ratio of storage/memory for good performance is 30:1.
So for a system with 128 GB of memory, the recommended upper bound on the storage capacity is 3.8 TB per node of data.
To store 6 TB of data per node, the minimum recommended amount of memory is 200 GB.

-Read more about :doc:`Scylla System Requirements </getting-started/system-requirements>` or :doc:`Starting Scylla in a Shared Environment </getting-started/scylla-in-a-shared-environment/>`.
+Read more about :doc:`ScyllaDB System Requirements </getting-started/system-requirements>` or :doc:`Starting ScyllaDB in a Shared Environment </getting-started/scylla-in-a-shared-environment/>`.


Storage Guidelines
==================

-Scylla utilizes the full potential of modern NVMe SSDs; so the faster drive, the better the performance.
+ScyllaDB utilizes the full potential of modern NVMe SSDs; the faster the drive, the better the performance.
If there is more than one SSD, it is recommended to use them as RAID 0 for the best performance.
-This is configured during ``scylla_setup`` and Scylla will create the RAID device automatically.
+This is configured during ``scylla_setup`` and ScyllaDB will create the RAID device automatically.
If there is limited SSD capacity, the commit log should be placed on the SSD.

The recommended file system is XFS because of its asynchronous append write support; it is also the primary file system ScyllaDB is tested with.

-As SSD’s wear out over time, it is recommended to re-run the iotune tool every few months. This helps Scylla’s IO scheduler to make the best performing choices.
+As SSDs wear out over time, it is recommended to re-run the iotune tool every few months. This helps ScyllaDB’s I/O scheduler make the best-performing choices.
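
A hedged sketch of that periodic re-measurement (assuming a systemd-based installation; ``scylla_io_setup`` wraps iotune and refreshes the stored I/O configuration):

.. code-block:: bash

   # Re-measure disk performance after significant SSD wear. Running it with
   # the node stopped avoids skewing the measurement with live traffic.
   sudo systemctl stop scylla-server
   sudo scylla_io_setup
   sudo systemctl start scylla-server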

-Read more about :doc:`Scylla System Requirements </getting-started/system-requirements>`.
+Read more about :doc:`ScyllaDB System Requirements </getting-started/system-requirements>`.

Networking Guidelines
=====================

For operational workloads, the minimum recommended network bandwidth is 10 Gbps.
The scylla_setup script takes care of optimizing the kernel parameters, IRQ handling, and so on.

-Read more about :ref:`Scylla Network Requirements <system-requirements-network>`.
+Read more about :ref:`ScyllaDB Network Requirements <system-requirements-network>`.

Cloud Compute Instance Recommendations
--------------------------------------

-Scylla is designed to utilize all hardware resources. Bare metal instances are preferred for best performance.
+ScyllaDB is designed to utilize all hardware resources. Bare metal instances are preferred for best performance.

-Read more about :doc:`Starting Scylla in a Shared Environment </getting-started/scylla-in-a-shared-environment/>`.
+Read more about :doc:`Starting ScyllaDB in a Shared Environment </getting-started/scylla-in-a-shared-environment/>`.

Image Guidelines
================

-Use the Scylla provided AMI on AWS EC2 or the Google Cloud Platform (CGP) image, if possible.
+Use the ScyllaDB-provided AMI on AWS EC2 or the Google Cloud Platform (GCP) image, if possible.
They have already been correctly configured for use in those public cloud environments.

AWS
===

AWS EC2 i3, i3en, i4i and c5d bare metal instances are **highly recommended** because they are optimized for high I/O.

-Read more about :ref:`Scylla Supported Platforms <system-requirements-supported-platforms>`.
+Read more about :ref:`ScyllaDB Supported Platforms <system-requirements-supported-platforms>`.

If bare metal isn’t an option, use Nitro-based instances and run with ‘host’ as the tenancy policy to prevent the instance from being shared with other VMs.
If Nitro isn’t possible, then use instance storage over EBS.
@@ -313,15 +313,15 @@ Docker
======

When running on the Docker platform, use CPU pinning and host networking for best performance.
-Read more about `The Cost of Containerization for Your Scylla <https://www.scylladb.com/2018/08/09/cost-containerization-scylla/>`_.
+Read more about `The Cost of Containerization for Your ScyllaDB <https://www.scylladb.com/2018/08/09/cost-containerization-scylla/>`_.
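
A minimal sketch of such a container start (the CPU range, data directory, and SMP setting are placeholders; adjust them to your host):

.. code-block:: bash

   # Run a ScyllaDB container pinned to cores 0-7 with host networking.
   docker run --name scylla -d \
       --cpuset-cpus 0-7 \
       --network host \
       -v /var/lib/scylla:/var/lib/scylla \
       scylladb/scylla --smp 8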

Kubernetes
==========

Just as with Docker, in a Kubernetes environment CPU pinning should be used as well.
In this case, the pod should be pinned to a CPU so that no sharing takes place.

-Read more about `Exploring Scylla on Kubernetes <https://www.scylladb.com/2018/03/29/scylla-kubernetes-overview/>`_.
+Read more about `Exploring ScyllaDB on Kubernetes <https://www.scylladb.com/2018/03/29/scylla-kubernetes-overview/>`_.

Data Compaction
---------------
@@ -330,7 +330,7 @@ When records get updated or deleted, the old data eventually needs to be deleted
The compaction settings can make a huge difference.

* Use the following :ref:`Compaction Strategy Matrix <CSM1>` to use the correct compaction strategy for your workload.
-* ICS is an incremental compaction strategy that combines the low space amplification of LCS with the low write amplification of STCS. It is **only** available with Scylla Enterprise.
+* ICS is an incremental compaction strategy that combines the low space amplification of LCS with the low write amplification of STCS. It is **only** available with ScyllaDB Enterprise.
* If you have time series data, the TWCS should be used.

Read more about :doc:`Compaction Strategies </architecture/compaction/compaction-strategies>`
@@ -366,5 +366,5 @@ Read more about `Maximizing Performance via Concurrency While Minimizing Timeout
Conclusion
----------

-Maximizing Scylla performance does require some effort even though Scylla will do its best to reduce the amount of configuration.
+Maximizing ScyllaDB performance does require some effort even though ScyllaDB will do its best to reduce the amount of configuration.
If the best practices are correctly applied, then the most common performance problems will be prevented.
diff --git a/docs/operating-scylla/procedures/tips/best-practices-scylla-on-docker.rst b/docs/operating-scylla/procedures/tips/best-practices-scylla-on-docker.rst
--- a/docs/operating-scylla/procedures/tips/best-practices-scylla-on-docker.rst
+++ b/docs/operating-scylla/procedures/tips/best-practices-scylla-on-docker.rst
@@ -136,7 +136,7 @@ Overriding scylla.yaml with a Master File
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes, it’s not possible to adjust ScyllaDB-specific settings (including non-network properties, like ``cluster_name``) directly from the command line when ScyllaDB is running within Docker.

-Instead, it may be necessary to incrementally override ``scylla.yaml`` settings by passing an external, master Scylla.yaml file when starting the Docker container for the node.
+Instead, it may be necessary to incrementally override ``scylla.yaml`` settings by passing an external, master ``scylla.yaml`` file when starting the Docker container for the node.

To do this, you can use the ``--volume (-v)`` command as before to specify the overriding ``.yaml`` file:

@@ -278,7 +278,7 @@ To disable developer mode:

--experimental ENABLE
---------------------
-The ``--experimental`` command line option enables Scylla's experimental mode. If no ``--experimental`` command line option is defined, ScyllaDB defaults to running with experimental mode disabled.
+The ``--experimental`` command line option enables ScyllaDB's experimental mode. If no ``--experimental`` command line option is defined, ScyllaDB defaults to running with experimental mode disabled.

**It is highly recommended to disable experimental mode for production deployments.**

diff --git a/docs/operating-scylla/procedures/tips/index.rst b/docs/operating-scylla/procedures/tips/index.rst
--- a/docs/operating-scylla/procedures/tips/index.rst
+++ b/docs/operating-scylla/procedures/tips/index.rst
@@ -1,5 +1,5 @@
-Scylla Best Practices
-=====================
+ScyllaDB Best Practices
+========================

.. toctree::
:hidden:
@@ -12,18 +12,18 @@ Scylla Best Practices


.. panel-box::
- :title: Scylla Best Practices
+ :title: ScyllaDB Best Practices
:id: "getting-started"
:class: my-panel


- Best Practices for running Scylla
+ Best Practices for running ScyllaDB

- * :doc:`Best Practices for Running Scylla on Docker <best-practices-scylla-on-docker>`
+ * :doc:`Best Practices for Running ScyllaDB on Docker <best-practices-scylla-on-docker>`
* :doc:`Production Readiness Guidelines <production-readiness>`
* :doc:`How to Avoid Node Mismanagement <avoid-node-mismanagement>`
- * :doc:`Maximizing Scylla Performance <benchmark-tips>`
- * `ScyllaDB Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on Scylla University
+ * :doc:`Maximizing ScyllaDB Performance <benchmark-tips>`
+ * `ScyllaDB Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on ScyllaDB University



diff --git a/docs/operating-scylla/procedures/tips/production-readiness.rst b/docs/operating-scylla/procedures/tips/production-readiness.rst
--- a/docs/operating-scylla/procedures/tips/production-readiness.rst
+++ b/docs/operating-scylla/procedures/tips/production-readiness.rst
@@ -14,8 +14,8 @@ Before You Begin
Pre-Deployment Requirements
===========================

-* :doc:`Scylla System Requirements</getting-started/system-requirements/>` - verify your instances, system, OS, etc are supported by Scylla for production machines.
-* :doc:`Scylla Getting Started </getting-started/index>`
+* :doc:`ScyllaDB System Requirements </getting-started/system-requirements/>` - verify that your instances, system, OS, etc. are supported by ScyllaDB for production machines.
+* :doc:`ScyllaDB Getting Started </getting-started/index>`

Choose a Compaction Strategy
============================
@@ -38,8 +38,8 @@ If you have a multi-datacenter architecture we recommend to have ``RF=3`` on eac

For additional information:

-* Read more about :doc:`Scylla Fault Tolerance </architecture/architecture-fault-tolerance/>`
-* Take a course at `Scylla University on RF <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/fault-tolerance-replication-factor/>`_.
+* Read more about :doc:`ScyllaDB Fault Tolerance </architecture/architecture-fault-tolerance/>`
+* Take a course at `ScyllaDB University on RF <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/fault-tolerance-replication-factor/>`_.

Consistency Levels
==================
@@ -50,9 +50,9 @@ We recommend using :code:`LOCAL_QUORUM` across **the cluster and DCs**

For additional information:

-* Refer to :doc:`Scylla Fault Tolerance </architecture/architecture-fault-tolerance/>`
+* Refer to :doc:`ScyllaDB Fault Tolerance </architecture/architecture-fault-tolerance/>`
* Watch a :doc:`Demo </architecture/console-CL-full-demo/>`
-* Take a course at `Scylla University on CL <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_
+* Take a course at `ScyllaDB University on CL <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/high-availability/topic/consistency-level/>`_

Gossip Configuration
====================
@@ -64,9 +64,9 @@ Gossip Configuration

For additional information:

- * Refer to :doc:`Gossip in Scylla </kb/gossip/>`
+ * Refer to :doc:`Gossip in ScyllaDB </kb/gossip/>`
* Follow the :doc:`How to Switch Snitches </operating-scylla/procedures/config-change/switch-snitch/>` procedure if required.
- * Take a course at `Scylla University on Gossip <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/architecture/topic/gossip/>`_
+ * Take a course at `ScyllaDB University on Gossip <https://university.scylladb.com/courses/scylla-essentials-overview/lessons/architecture/topic/gossip/>`_

#. Use the correct Data Replication strategy.

@@ -99,7 +99,7 @@ Compression
Inter-node Compression
======================

-Enable Inter-node Compression by editing the Scylla Configuration file (/etc/scylla.yaml).
+Enable inter-node compression by editing the ScyllaDB configuration file (``/etc/scylla/scylla.yaml``).

:code:`internode_compression: all`

@@ -108,9 +108,9 @@ For additional information, see the Admin Guide :ref:`Inter-node Compression <in
Driver Compression
==================

-This refers to compressing traffic between the client and Scylla.
-Verify your client driver is using compressed traffic when connected to Scylla.
-As compression is driver settings dependent, please check your client driver manual or :doc:`Scylla Drivers </using-scylla/drivers/index>`.
+This refers to compressing traffic between the client and ScyllaDB.
+Verify that your client driver compresses traffic when connected to ScyllaDB.
+As compression settings depend on the driver, check your client driver manual or :doc:`ScyllaDB Drivers </using-scylla/drivers/index>`.


Connectivity
@@ -119,35 +119,35 @@ Connectivity
Drivers Settings
================

-* Use shard aware drivers wherever possible. :doc:`Scylla Drivers </using-scylla/drivers/index>` (not third-party drivers) are shard aware.
+* Use shard aware drivers wherever possible. :doc:`ScyllaDB Drivers </using-scylla/drivers/index>` (not third-party drivers) are shard aware.
* Configure the connection pool - open more connections (>3 per shard) and/or more clients. See `this blog <https://www.scylladb.com/2019/11/20/maximizing-performance-via-concurrency-while-minimizing-timeouts-in-distributed-databases/>`_.

Management
----------

-You must use both Scylla Manager and Scylla Monitor.
+You must use both ScyllaDB Manager and the ScyllaDB Monitoring Stack.

-Scylla Manager
-==============
+ScyllaDB Manager
+================

-Scylla Manager enables centralized cluster administration and database
+ScyllaDB Manager enables centralized cluster administration and database
automation such as repair, backup, and health-check.

Repair
......

-Run repairs preferably once a week and run them exclusively from Scylla Manager.
+Run repairs preferably once a week and run them exclusively from ScyllaDB Manager.
Refer to `Repair a Cluster <https://manager.docs.scylladb.com/branch-2.2/repair/index.html>`_

Backup and Restore
..................

We recommend the following:

-* Run a full weekly backup from Scylla Manager
-* Run a daily backup from Scylla Manager
+* Run a full weekly backup from ScyllaDB Manager
+* Run a daily backup from ScyllaDB Manager
* Check the bucket used for restore. This can be done by performing a `restore <https://manager.docs.scylladb.com/branch-2.2/restore/index.html>`_ and making sure the data is valid. This action should be done once a month, or more frequently if needed. Ask our Support team to help you with this.
-* Save backup to a bucket supported by Scylla Manager.
+* Save backup to a bucket supported by ScyllaDB Manager.

For additional information:

@@ -201,4 +201,4 @@ Additional Topics
* :doc:`Add a Node </operating-scylla/procedures/cluster-management/add-node-to-cluster/>`
* `Repair <https://manager.docs.scylladb.com/branch-2.2/repair/index.html>`_
* :doc:`Cleanup </operating-scylla/nodetool-commands/cleanup/>`
-* Tech Talk: `How to be successful with Scylla <https://www.scylladb.com/tech-talk/how-to-be-successful-with-scylla/>`_
+* Tech Talk: `How to be successful with ScyllaDB <https://www.scylladb.com/tech-talk/how-to-be-successful-with-scylla/>`_
diff --git a/docs/operating-scylla/security/_common/security-index.rst b/docs/operating-scylla/security/_common/security-index.rst
--- a/docs/operating-scylla/security/_common/security-index.rst
+++ b/docs/operating-scylla/security/_common/security-index.rst
@@ -1,4 +1,4 @@
-* :doc:`Scylla Security Checklist </operating-scylla/security/security-checklist/>`
+* :doc:`ScyllaDB Security Checklist </operating-scylla/security/security-checklist/>`
* :doc:`Enable Authentication </operating-scylla/security/authentication/>`
* :doc:`Enable and Disable Authentication Without Downtime </operating-scylla/security/runtime-authentication/>`
* :doc:`Reset Authenticator Password </troubleshooting/password-reset/>`
diff --git a/docs/operating-scylla/security/_common/ssl-hot-reload.rst b/docs/operating-scylla/security/_common/ssl-hot-reload.rst
--- a/docs/operating-scylla/security/_common/ssl-hot-reload.rst
+++ b/docs/operating-scylla/security/_common/ssl-hot-reload.rst
@@ -2,4 +2,4 @@
Once ``internode_encryption`` or ``client_encryption_options`` is enabled
(by being set to something other than none), the SSL / TLS certificates and key files specified in scylla.yaml
will continue to be monitored and reloaded if modified on disk.
-When the files are updated, Scylla reloads them and uses them for subsequent connections.
+When the files are updated, ScyllaDB reloads them and uses them for subsequent connections.
diff --git a/docs/operating-scylla/security/authentication.rst b/docs/operating-scylla/security/authentication.rst
--- a/docs/operating-scylla/security/authentication.rst
+++ b/docs/operating-scylla/security/authentication.rst
@@ -3,25 +3,25 @@ Enable Authentication

.. scylladb_include_flag:: upgrade-note-authentication.rst

-Authentication is the process where login accounts and their passwords are verified, and the user is allowed access to the database. Authentication is done internally within Scylla and is not done with a third party. Users and passwords are created with roles using a ``CREATE ROLE`` statement. Refer to :doc:`Grant Authorization CQL Reference </operating-scylla/security/authorization>` for details.
+Authentication is the process by which login accounts and their passwords are verified, and the user is allowed access to the database. Authentication is done internally within ScyllaDB and is not done with a third party. Users and passwords are created with roles using a ``CREATE ROLE`` statement. Refer to :doc:`Grant Authorization CQL Reference </operating-scylla/security/authorization>` for details.

-The procedure described below enables Authentication on the Scylla servers. It is intended to be used when you do **not** have applications running with Scylla/Cassandra drivers.
+The procedure described below enables Authentication on the ScyllaDB servers. It is intended to be used when you do **not** have applications running with ScyllaDB/Cassandra drivers.

-.. warning:: Once you enable authentication, all clients (such as applications using Scylla/Apache Cassandra drivers) will **stop working** until they are updated or reconfigured to work with authentication.
+.. warning:: Once you enable authentication, all clients (such as applications using ScyllaDB/Apache Cassandra drivers) will **stop working** until they are updated or reconfigured to work with authentication.

-If this downtime is not an option, you can follow the instructions in :doc:`Enable and Disable Authentication Without Downtime </operating-scylla/security/runtime-authentication>`, which using a transient state, allows clients to work with or without Authentication at the same time. In this state, you can update the clients (application using Scylla/Apache Cassandra drivers) one at the time. Once all the clients are using Authentication, you can enforce Authentication on all Scylla nodes as well.
+If this downtime is not an option, you can follow the instructions in :doc:`Enable and Disable Authentication Without Downtime </operating-scylla/security/runtime-authentication>`, which, using a transient state, allows clients to work with or without Authentication at the same time. In this state, you can update the clients (applications using ScyllaDB/Apache Cassandra drivers) one at a time. Once all the clients are using Authentication, you can enforce Authentication on all ScyllaDB nodes as well.

Procedure
----------

-#. For each Scylla node in the cluster, edit the ``/etc/scylla/scylla.yaml`` file to change the ``authenticator`` parameter from ``AllowAllAuthenticator`` to ``PasswordAuthenticator``.
+#. For each ScyllaDB node in the cluster, edit the ``/etc/scylla/scylla.yaml`` file to change the ``authenticator`` parameter from ``AllowAllAuthenticator`` to ``PasswordAuthenticator``.

.. code-block:: yaml

authenticator: PasswordAuthenticator


-#. Restart Scylla.
+#. Restart ScyllaDB.

.. include:: /rst_include/scylla-commands-restart-index.rst
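
Once the nodes are back up, you can verify that authentication is enforced by logging in with the default superuser and creating a superuser of your own (a sketch; the node address, role name, and password are placeholders):

.. code-block:: bash

   # Log in with the default superuser (cassandra/cassandra) and create a new
   # superuser role to use instead of the default one.
   cqlsh 10.0.0.1 -u cassandra -p cassandra \
       -e "CREATE ROLE dba WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'StrongPassword';"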

diff --git a/docs/operating-scylla/security/authorization.rst b/docs/operating-scylla/security/authorization.rst
--- a/docs/operating-scylla/security/authorization.rst
+++ b/docs/operating-scylla/security/authorization.rst
@@ -5,7 +5,7 @@
Grant Authorization CQL Reference
---------------------------------

-Authorization is the process by where users are granted permissions, which entitle them to access, or permission to change data on specific keyspaces, tables or an entire datacenter. Authorization for Scylla is done internally within Scylla and is not done with a third-party such as LDAP or OAuth. Granting permissions to users requires the use of a role such as a Database Administrator as well as :doc:`enabling the CassandraAuthorizer </operating-scylla/security/enable-authorization>`. It also requires a user who has been :doc:`authenticated </operating-scylla/security/authentication>`.
+Authorization is the process by which users are granted permissions that entitle them to access or change data on specific keyspaces, tables, or an entire datacenter. Authorization for ScyllaDB is done internally within ScyllaDB and is not done with a third party such as LDAP or OAuth. Granting permissions to users requires the use of a role, such as a Database Administrator, as well as :doc:`enabling the CassandraAuthorizer </operating-scylla/security/enable-authorization>`. It also requires a user who has been :doc:`authenticated </operating-scylla/security/authentication>`.



@@ -66,7 +66,7 @@ If a role has the ``LOGIN`` privilege, clients may identify as that role when co
connection, the client will acquire any roles and privileges granted to that role.

Only a client with the ``CREATE`` permission on the database roles resource may issue ``CREATE ROLE`` requests (see
-the :ref:`relevant section <cql-permissions>` below) unless the client is a ``SUPERUSER``. Role management in Scylla
+the :ref:`relevant section <cql-permissions>` below) unless the client is a ``SUPERUSER``. Role management in ScyllaDB
is pluggable, and custom implementations may support only a subset of the listed options.

Role names should be quoted if they contain non-alphanumeric characters.
@@ -217,8 +217,8 @@ lists all roles directly granted to ``bob`` without including any of the transit
Users
^^^^^

-Prior to the introduction of roles in Scylla 2.2, authentication and authorization were based around the concept of a
-``USER``. For backward compatibility, this syntax has been preserved. From Scylla 2.2 and onward, it is recommended to use :ref:`roles <db-roles>`.
+Prior to the introduction of roles in ScyllaDB 2.2, authentication and authorization were based around the concept of a
+``USER``. For backward compatibility, this syntax has been preserved. From ScyllaDB 2.2 and onward, it is recommended to use :ref:`roles <db-roles>`.

.. _create-user-statement:

@@ -307,7 +307,7 @@ Data Control
Permissions
~~~~~~~~~~~

-Permissions on resources are granted to users; there are several different types of resources in Scylla, and each type
+Permissions on resources are granted to users; there are several different types of resources in ScyllaDB, and each type
is modelled hierarchically:

- The hierarchy of Data resources, Keyspaces, and Tables has the structure ``ALL KEYSPACES`` -> ``KEYSPACE`` ->
diff --git a/docs/operating-scylla/security/client-node-encryption.rst b/docs/operating-scylla/security/client-node-encryption.rst
--- a/docs/operating-scylla/security/client-node-encryption.rst
+++ b/docs/operating-scylla/security/client-node-encryption.rst
@@ -3,27 +3,27 @@ Encryption: Data in Transit Client to Node

Follow the procedures below to enable client-to-node encryption.
Once enabled, all communication between the client and the node is transmitted over TLS/SSL.
-The libraries used by Scylla for OpenSSL are FIPS 140-2 certified.
+The libraries used by ScyllaDB for OpenSSL are FIPS 140-2 certified.

Workflow
^^^^^^^^

-Each Scylla node needs to be enabled for TLS/SSL encryption separately. Repeat this procedure for each node.
+Each ScyllaDB node needs to be enabled for TLS/SSL encryption separately. Repeat this procedure for each node.

#. `Configure the Node`_
#. `Validate the Clients`_

Configure the Node
^^^^^^^^^^^^^^^^^^
-This procedure is to be done on **every** Scylla node, one node at a time (one by one).
+This procedure must be performed on **every** ScyllaDB node, one node at a time.

.. note:: If you are working on a new cluster, skip steps 1 & 2.

**Procedure**

#. Run ``nodetool drain``.

-#. Stop Scylla.
+#. Stop ScyllaDB.

.. include:: /rst_include/scylla-commands-stop-index.rst

@@ -34,7 +34,7 @@ This procedure is to be done on **every** Scylla node, one node at a time (one b
* ``enabled`` (default - false)
* ``certificate`` - A PEM format certificate, either self-signed, or provided by a certificate authority (CA).
* ``keyfile`` - The corresponding PEM format key for the certificate
- * ``truststore`` - Optional path to a PEM format certificate store holding the trusted CA certificates. If not provided, Scylla will attempt to use the system truststore to authenticate certificates.
+ * ``truststore`` - Optional path to a PEM format certificate store holding the trusted CA certificates. If not provided, ScyllaDB will attempt to use the system truststore to authenticate certificates.

.. note:: If using a self-signed certificate, the "truststore" parameter needs to be set to a PEM format container with the private authority.

@@ -53,7 +53,7 @@ This procedure is to be done on **every** Scylla node, one node at a time (one b
require_client_auth: ...
priority_string: SECURE128:-VERS-TLS1.0:-VERS-TLS1.1

-#. Start Scylla:
+#. Start ScyllaDB:

.. include:: /rst_include/scylla-commands-start-index.rst

@@ -124,7 +124,7 @@ For Complete instructions, see :doc:`Generate a cqlshrc File <gen-cqlsh-file>`

.. note:: When running cassandra-stress, you may encounter an exception if some nodes are not yet in client-to-node SSL encryption mode; cassandra-stress will continue to run and connect only to the nodes it can.

- .. When using Scylla v1.6.x or lower you will need a dummy keystore in the default (conf/.keystore) location with password "cassandra" to run. The contents is irrelevant. Also, it only pertains to cassandra-stress. It has no impact/relation to using the normal java driver connection or cqlsh.
+ .. When using ScyllaDB v1.6.x or lower you will need a dummy keystore in the default (conf/.keystore) location with password "cassandra" to run. The contents is irrelevant. Also, it only pertains to cassandra-stress. It has no impact/relation to using the normal java driver connection or cqlsh.

#. Enable encryption on the client application.
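
As a quick client-side check (assuming a cqlshrc file with the certificate paths has already been generated as described above, and that the node listens on the default CQL port):

.. code-block:: bash

   # Connect over TLS; cqlsh reads the certificate settings from ~/.cassandra/cqlshrc.
   cqlsh --ssl 10.0.0.1 9042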

diff --git a/docs/operating-scylla/security/enable-authorization.rst b/docs/operating-scylla/security/enable-authorization.rst
--- a/docs/operating-scylla/security/enable-authorization.rst
+++ b/docs/operating-scylla/security/enable-authorization.rst
@@ -3,19 +3,19 @@ Enable Authorization
====================


-Authorization is the process by where users are granted permissions, which entitle them to access or change data on specific keyspaces, tables, or an entire datacenter. Authorization for Scylla is done internally within Scylla and is not done with a third party such as LDAP or OAuth. Granting permissions to users requires the use of a role such as Database Administrator and requires a user who has been :doc:`authenticated </operating-scylla/security/authentication>`.
+Authorization is the process by which users are granted permissions that entitle them to access or change data on specific keyspaces, tables, or an entire datacenter. Authorization for ScyllaDB is done internally within ScyllaDB and is not done with a third party such as LDAP or OAuth. Granting permissions to users requires the use of a role such as Database Administrator and requires a user who has been :doc:`authenticated </operating-scylla/security/authentication>`.

-Authorization is enabled using the authorizer setting in scylla.yaml. Scylla has two authorizers available:
+Authorization is enabled using the authorizer setting in scylla.yaml. ScyllaDB has two authorizers available:

* ``AllowAllAuthorizer`` (default setting) - which performs no checking and so effectively grants all permissions to all roles. This must be used if AllowAllAuthenticator is the configured :doc:`authenticator </operating-scylla/security/authentication>`.

-* ``CassandraAuthorizer`` - which implements permission management functionality and stores its data in Scylla system tables.
+* ``CassandraAuthorizer`` - which implements permission management functionality and stores its data in ScyllaDB system tables.


.. note:: Once Authorization is enabled, **all users must**:

* Have :ref:`roles <roles>` and permissions (set by a DBA with :ref:`superuser <superuser>` credentials) configured.
- * Use a user/password to :ref:`connect <access>` to Scylla.
+ * Use a user/password to :ref:`connect <access>` to ScyllaDB.

Enabling Authorization
----------------------
@@ -100,7 +100,7 @@ In this example, you are creating a user (``db_user``) who can access with passw
Clients Resume Access with New Permissions
..........................................

-1. Restart Scylla. As each node restarts and clients reconnect, the enforcement of the granted permissions will begin.
+1. Restart ScyllaDB. As each node restarts and clients reconnect, the enforcement of the granted permissions will begin.

.. include:: /rst_include/scylla-commands-restart-index.rst

diff --git a/docs/operating-scylla/security/gen-cqlsh-file.rst b/docs/operating-scylla/security/gen-cqlsh-file.rst
--- a/docs/operating-scylla/security/gen-cqlsh-file.rst
+++ b/docs/operating-scylla/security/gen-cqlsh-file.rst
@@ -2,7 +2,7 @@
Generate a cqlshrc File
=======================

-Making connections to a Scylla cluster that uses SSL can be a tricky process, but it doesn't diminish the importance of properly securing your client connections with SSL. This is especially needed when you are connecting to your cluster via the Internet or an untrusted network.
+Making connections to a ScyllaDB cluster that uses SSL can be a tricky process, but it doesn't diminish the importance of properly securing your client connections with SSL. This is especially needed when you are connecting to your cluster via the Internet or an untrusted network.

Prerequisites
--------------
diff --git a/docs/operating-scylla/security/generate-certificate.rst b/docs/operating-scylla/security/generate-certificate.rst
--- a/docs/operating-scylla/security/generate-certificate.rst
+++ b/docs/operating-scylla/security/generate-certificate.rst
@@ -94,9 +94,9 @@ As a result, we should now have:
* :code:`db.crt` - PEM format certificate for the `db.key` signed by the `cadb.pem` and used by the database node.
* :code:`cadb.pem` - PEM format signing identity that can be used as a trust store. Use it to sign client certificates that will connect to the database nodes.

-Place the files in a directory of your choice and make sure you set permissions so your Scylla instance can read them. Then update the server/client configuration to reference them.
+Place the files in a directory of your choice and make sure you set permissions so your ScyllaDB instance can read them. Then update the server/client configuration to reference them.
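
For instance, assuming the files were placed under a hypothetical ``/etc/scylla/certs/`` directory, the permissions could be set like this:

.. code-block:: bash

   # Make the certificate, key, and trust store readable by the scylla user.
   sudo chown -R root:scylla /etc/scylla/certs
   sudo chmod 750 /etc/scylla/certs
   sudo chmod 640 /etc/scylla/certs/db.crt /etc/scylla/certs/db.key /etc/scylla/certs/cadb.pem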

-When restarting Scylla with the new configuration, you should see the following messages in the log:
+When restarting ScyllaDB with the new configuration, you should see the following messages in the log:

When node-to-node encryption is active:

diff --git a/docs/operating-scylla/security/index.rst b/docs/operating-scylla/security/index.rst
--- a/docs/operating-scylla/security/index.rst
+++ b/docs/operating-scylla/security/index.rst
@@ -52,4 +52,4 @@ Security
* :doc:`Generating a self-signed Certificate Chain Using openssl </operating-scylla/security/generate-certificate/>`
* `Encryption at Rest <https://enterprise.docs.scylladb.com/stable/operating-scylla/security/encryption-at-rest.html>`_ available in `ScyllaDB Enterprise <https://enterprise.docs.scylladb.com/>`_

-Also check out the `Security Features lesson <https://university.scylladb.com/courses/scylla-operations/lessons/security-features/topic/security-features/>`_ on Scylla University.
+Also check out the `Security Features lesson <https://university.scylladb.com/courses/scylla-operations/lessons/security-features/topic/security-features/>`_ on ScyllaDB University.
diff --git a/docs/operating-scylla/security/node-node-encryption.rst b/docs/operating-scylla/security/node-node-encryption.rst
--- a/docs/operating-scylla/security/node-node-encryption.rst
+++ b/docs/operating-scylla/security/node-node-encryption.rst
@@ -4,7 +4,7 @@ Encryption: Data in Transit Node to Node
Communication between all or some nodes can be encrypted. The controlling parameter is :code:`server_encryption_options`.

Once enabled, all communication between the nodes is transmitted over TLS/SSL.
-The libraries used by Scylla for OpenSSL are FIPS 140-2 certified.
+The libraries used by ScyllaDB for OpenSSL are FIPS 140-2 certified.

To build a self-signed certificate chain, see :doc:`generating a self-signed certificate chain using openssl </operating-scylla/security/generate-certificate/>`.

@@ -23,7 +23,7 @@ To build a self-signed certificate chain, see :doc:`generating a self-signed cer

* ``certificate`` - A PEM format certificate, either self-signed, or provided by a certificate authority (CA).
* ``keyfile`` - The corresponding PEM format key for the certificate.
- * ``truststore`` - Optional path to a PEM format certificate store of trusted CAs. If not provided, Scylla will attempt to use the system trust store to authenticate certificates.
+ * ``truststore`` - Optional path to a PEM format certificate store of trusted CAs. If not provided, ScyllaDB will attempt to use the system trust store to authenticate certificates.
* ``certficate_revocation_list`` - The path to a PEM-encoded certificate revocation list (CRL) - a list of issued certificates that have been revoked before their expiration date.
* ``require_client_auth`` - Set to ``True`` to require client side authorization. ``False`` by default.
* ``priority_string`` - Specifies session's handshake algorithms and options to use. By default there are none.
@@ -41,7 +41,7 @@ To build a self-signed certificate chain, see :doc:`generating a self-signed cer
certficate_revocation_list: <path to a PEM-encoded CRL file> (optional)


-#. Restart Scylla node to apply the changes.
+#. Restart the ScyllaDB node to apply the changes.

.. include:: /rst_include/scylla-commands-restart-index.rst
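
Taken together, a minimal sketch of the resulting ``server_encryption_options`` section in ``scylla.yaml`` could look like the following. The paths are illustrative, the file names follow the self-signed certificate example above, and the ``internode_encryption`` mode is an assumption rather than one of the options listed above:

.. code-block:: yaml

   server_encryption_options:
       internode_encryption: all                # assumed option; encrypt traffic between all nodes
       certificate: /etc/scylla/certs/db.crt    # PEM certificate for this node
       keyfile: /etc/scylla/certs/db.key        # matching PEM private key
       truststore: /etc/scylla/certs/cadb.pem   # optional; system trust store is used if omitted
       require_client_auth: True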

diff --git a/docs/operating-scylla/security/rbac-usecase.rst b/docs/operating-scylla/security/rbac-usecase.rst
--- a/docs/operating-scylla/security/rbac-usecase.rst
+++ b/docs/operating-scylla/security/rbac-usecase.rst
@@ -9,7 +9,7 @@ Role Based Access Control (RBAC) is a method of reducing lists of authorized use

Roles vs Users
--------------
-Roles supersede users and generalize them. In addition to doing with `roles` everything that you could previously do with `users` in older versions of Scylla, roles can be granted to other roles. If a role `developer` is granted to a role `manager`, then all permissions of the `developer` are granted to the `manager`.
+Roles supersede users and generalize them. In addition to doing with `roles` everything that you could previously do with `users` in older versions of ScyllaDB, roles can be granted to other roles. If a role `developer` is granted to a role `manager`, then all permissions of the `developer` are granted to the `manager`.

In order to distinguish roles which correspond uniquely to an individual person and roles which are representative of a group, any role that can login is a user. Within that framework, you can conclude that all users are roles, but not all roles are users.

@@ -47,7 +47,7 @@ Use case
--------

This is a use case that is given as an example. You should modify the commands to your organization’s requirements.
-A health club has opened, and they have an application that supports their clients, which is using Scylla as the database backend. The following groups would need to be given permissions:
+A health club has opened, and they have an application that supports their clients, which is using ScyllaDB as the database backend. The following groups would need to be given permissions:
The office staff can add new customers and can cancel subscriptions, view all customer data, and can change classes for the trainers as well as view the trainers’ data.
Trainers can only view their schedule and can view customer data.
Customers view the class schedule.
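
As a hedged sketch of the mechanics described above, where the keyspace name ``healthclub`` and the role names are illustrative only:

.. code-block:: cql

   -- illustrative names; adapt to your own keyspace and organization
   CREATE ROLE trainers;
   CREATE ROLE office WITH LOGIN = true AND PASSWORD = 'office-password';
   GRANT SELECT ON KEYSPACE healthclub TO trainers;   -- trainers can view data
   GRANT MODIFY ON KEYSPACE healthclub TO office;     -- office staff can add and cancel subscriptions
   GRANT trainers TO office;                          -- office also inherits the trainers' permissions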
diff --git a/docs/operating-scylla/security/runtime-authentication.rst b/docs/operating-scylla/security/runtime-authentication.rst
--- a/docs/operating-scylla/security/runtime-authentication.rst
+++ b/docs/operating-scylla/security/runtime-authentication.rst
@@ -3,14 +3,14 @@ Enable and Disable Authentication Without Downtime

.. scylladb_include_flag:: upgrade-note-runtime-authentication.rst

-Authentication is the process where login accounts and their passwords are verified, and the user is allowed access into the database. Authentication is done internally within Scylla and is not done with a third party. Users and passwords are created with :doc:`roles </operating-scylla/security/authorization>` using a ``CREATE ROLE`` statement. This procedure enables Authentication on the Scylla servers using a transit state, allowing clients to work with or without Authentication at the same time. In this state, you can update the clients (application using Scylla/Apache Cassandra drivers) one at the time. Once all the clients are using Authentication, you can enforce Authentication on all Scylla nodes as well. If you would rather perform a faster authentication procedure where all clients (application using Scylla/Apache Cassandra drivers) will stop working until they are updated to work with Authentication, refer to :doc:`Enable Authentication </operating-scylla/security/runtime-authentication>`.
+Authentication is the process where login accounts and their passwords are verified, and the user is allowed access into the database. Authentication is done internally within ScyllaDB and is not done with a third party. Users and passwords are created with :doc:`roles </operating-scylla/security/authorization>` using a ``CREATE ROLE`` statement. This procedure enables Authentication on the ScyllaDB servers using a transitional state, allowing clients to work with or without Authentication at the same time. In this state, you can update the clients (applications using ScyllaDB/Apache Cassandra drivers) one at a time. Once all the clients are using Authentication, you can enforce Authentication on all ScyllaDB nodes as well. If you would rather perform a faster authentication procedure where all clients (applications using ScyllaDB/Apache Cassandra drivers) will stop working until they are updated to work with Authentication, refer to :doc:`Enable Authentication </operating-scylla/security/runtime-authentication>`.



Enable Authentication Without Downtime
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-This procedure allows you to enable authentication on a live Scylla cluster without downtime.
+This procedure allows you to enable authentication on a live ScyllaDB cluster without downtime.

Procedure
---------
@@ -21,7 +21,7 @@ Procedure

authenticator: com.scylladb.auth.TransitionalAuthenticator

-#. Run the :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (Scylla stops listening to its connections from the client and other nodes).
+#. Run the :doc:`nodetool drain </operating-scylla/nodetool-commands/drain>` command (ScyllaDB stops listening to its connections from the client and other nodes).

#. Restart the nodes one by one to apply the effect.

@@ -79,7 +79,7 @@ Procedure
Disable Authentication Without Downtime
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-This procedure allows you to disable authentication on a live Scylla cluster without downtime. Once disabled, you will have to re-enable authentication where required.
+This procedure allows you to disable authentication on a live ScyllaDB cluster without downtime. Once disabled, you will have to re-enable authentication where required.

Procedure
---------
diff --git a/docs/operating-scylla/security/saslauthd.rst b/docs/operating-scylla/security/saslauthd.rst
--- a/docs/operating-scylla/security/saslauthd.rst
+++ b/docs/operating-scylla/security/saslauthd.rst
@@ -1,10 +1,10 @@
Configure SaslauthdAuthenticator
--------------------------------

-Scylla can outsource authentication to a third-party utility named `saslauthd <https://linux.die.net/man/8/saslauthd>`_, which, in turn,supports many different authentication mechanisms.
-Scylla accomplishes this by providing a custom authenticator named SaslauthdAuthenticator.
+ScyllaDB can outsource authentication to a third-party utility named `saslauthd <https://linux.die.net/man/8/saslauthd>`_, which, in turn, supports many different authentication mechanisms.
+ScyllaDB accomplishes this by providing a custom authenticator named SaslauthdAuthenticator.
This procedure explains how to install and configure it.
-Once configured, any login to Scylla is authenticated with the SaslauthdAuthenticator.
+Once configured, any login to ScyllaDB is authenticated with the SaslauthdAuthenticator.

**Procedure**

@@ -56,8 +56,8 @@ Once configured, any login to Scylla is authenticated with the SaslauthdAuthenti
* ``authenticator: com.scylladb.auth.SaslauthdAuthenticator``
* ``saslauthd_socket_path: /path/to/the/mux``

-#. Restart the Scylla server. From now on, Scylla will authenticate all login attempts via saslauthd.
+#. Restart the ScyllaDB server. From now on, ScyllaDB will authenticate all login attempts via saslauthd.

.. include:: /rst_include/scylla-commands-restart-index.rst

-#. Create Scylla roles which **match** the same roles in the LDAP server. To create a role, refer to the :ref:`CQL Reference <cql-security>` and the :doc:`RBAC example <rbac-usecase>`.
+#. Create ScyllaDB roles which **match** the same roles in the LDAP server. To create a role, refer to the :ref:`CQL Reference <cql-security>` and the :doc:`RBAC example <rbac-usecase>`.
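
For example, a matching role could be created with CQL. This is a hedged sketch in which the role and keyspace names are illustrative and the password is assumed to be verified externally by saslauthd:

.. code-block:: cql

   -- 'app_user' is illustrative; it must match an account known to saslauthd (for example, via LDAP)
   CREATE ROLE app_user WITH LOGIN = true;
   -- grant permissions as needed; the keyspace name is illustrative
   GRANT SELECT ON KEYSPACE myapp TO app_user;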
diff --git a/docs/operating-scylla/security/security-checklist.rst b/docs/operating-scylla/security/security-checklist.rst
--- a/docs/operating-scylla/security/security-checklist.rst
+++ b/docs/operating-scylla/security/security-checklist.rst
@@ -1,18 +1,18 @@
ScyllaDB Security Checklist
=============================
-The Scylla Security checklist is a list of security recommendations that should be implemented to protect your Scylla cluster.
+The ScyllaDB Security checklist is a list of security recommendations that should be implemented to protect your ScyllaDB cluster.


Enable Authentication
~~~~~~~~~~~~~~~~~~~~~

-:doc:`Authentication </operating-scylla/security/authentication/>` is a security step to verify the identity of a client. When enabled, Scylla requires all clients to authenticate themselves to determine their access to the cluster.
+:doc:`Authentication </operating-scylla/security/authentication/>` is a security step to verify the identity of a client. When enabled, ScyllaDB requires all clients to authenticate themselves to determine their access to the cluster.


Enable Authorization
~~~~~~~~~~~~~~~~~~~~~

-:doc:`Authorization </operating-scylla/security/enable-authorization/>` is a security step to verify the granted permissions of a client. When enabled, Scylla will check all clients for their access permissions to the cluster objects(keyspaces, tables).
+:doc:`Authorization </operating-scylla/security/enable-authorization/>` is a security step to verify the granted permissions of a client. When enabled, ScyllaDB will check all clients for their access permissions to the cluster objects (keyspaces, tables).


Role Base Access
@@ -29,7 +29,7 @@ Role-Based Access Control (:doc:`RBAC</operating-scylla/security/rbac-usecase/>`
Encryption on Transit, Client to Node and Node to Node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Encryption on Transit protects your communication against interception by a third party on the network connection.
-Configure Scylla to use TLS/SSL for all the connections. Use TLS/SSL to encrypt communication between Scylla nodes and client applications.
+Configure ScyllaDB to use TLS/SSL for all the connections. Use TLS/SSL to encrypt communication between ScyllaDB nodes and client applications.

.. only:: enterprise

@@ -52,15 +52,15 @@ See `Encryption at Rest <https://enterprise.docs.scylladb.com/stable/operating-s

Reduce the Network Exposure
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Ensure that Scylla runs in a trusted network environment.
-A best practice is to maintain a list of ports used by Scylla and to monitor them to ensure that only trusted clients access those network interfaces and ports.
+Ensure that ScyllaDB runs in a trusted network environment.
+A best practice is to maintain a list of ports used by ScyllaDB and to monitor them to ensure that only trusted clients access those network interfaces and ports.
The diagram below shows a single datacenter cluster deployment, with the list of ports used for each connection type. You should periodically check to make sure that only these ports are open and that they are open to relevant IPs only.
Most of the ports' settings are configurable in the scylla.yaml file.
-Also, see the list of ports used by Scylla.
+Also, see the list of ports used by ScyllaDB.

.. image:: Scylla-Ports2.png

-The Scylla ports are detailed in the table below. For Scylla Manager ports, see the `Scylla Manager Documentation <https://manager.docs.scylladb.com>`_.
+The ScyllaDB ports are detailed in the table below. For ScyllaDB Manager ports, see the `ScyllaDB Manager Documentation <https://manager.docs.scylladb.com>`_.

.. include:: /operating-scylla/_common/networking-ports.rst

@@ -72,10 +72,10 @@ Audit System Activity

Auditing is available in `ScyllaDB Enterprise <https://enterprise.docs.scylladb.com/>`_.

-Using the `auditing feature <https://enterprise.docs.scylladb.com/stable/operating-scylla/security/auditing.html>`_ allows the administrator to know “who did / looked at / changed what and when.” It also allows logging some or all the activities a user performs on the Scylla cluster.
+Using the `auditing feature <https://enterprise.docs.scylladb.com/stable/operating-scylla/security/auditing.html>`_ allows the administrator to know “who did / looked at / changed what and when.” It also allows logging some or all the activities a user performs on the ScyllaDB cluster.

General Recommendations
~~~~~~~~~~~~~~~~~~~~~~~

-* Update your cluster with the latest Scylla version.
+* Update your cluster with the latest ScyllaDB version.
* Make sure your operating system and libraries are up to date.
diff --git a/docs/rst_include/architecture-index.rst b/docs/rst_include/architecture-index.rst
--- a/docs/rst_include/architecture-index.rst
+++ b/docs/rst_include/architecture-index.rst
@@ -1 +1 @@
-:doc:`Scylla Architecture </architecture/index/>`
+:doc:`ScyllaDB Architecture </architecture/index/>`
diff --git a/docs/rst_include/configure-index.rst b/docs/rst_include/configure-index.rst
--- a/docs/rst_include/configure-index.rst
+++ b/docs/rst_include/configure-index.rst
@@ -1,5 +1,5 @@
-* :doc:`Configure Scylla</getting-started/system-configuration/>`
-* :doc:`Scylla in a Shared Environment</getting-started/scylla-in-a-shared-environment/>`
-* :doc:`Migrate to Scylla </using-scylla/migrate-scylla>` - How to migrate your current database to Scylla
-* :doc:`Integrate with Scylla </using-scylla/integrations/index>` - Integration solutions with Scylla
+* :doc:`Configure ScyllaDB</getting-started/system-configuration/>`
+* :doc:`ScyllaDB in a Shared Environment</getting-started/scylla-in-a-shared-environment/>`
+* :doc:`Migrate to ScyllaDB </using-scylla/migrate-scylla>` - How to migrate your current database to ScyllaDB
+* :doc:`Integrate with ScyllaDB </using-scylla/integrations/index>` - Integration solutions with ScyllaDB

diff --git a/docs/rst_include/migrate-index.rst b/docs/rst_include/migrate-index.rst
--- a/docs/rst_include/migrate-index.rst
+++ b/docs/rst_include/migrate-index.rst
@@ -5,11 +5,11 @@
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Migrate to Scylla</h5>
+ <h5 id="getting-started">Migrate to ScyllaDB</h5>
</div>
<div class="medium-9 columns">

-* :doc:`Scylla Cassandra Compatibility</cassandra-compatibility/>` - Scylla 1.x is a drop-in replacement for Apache Cassandra 2.1.8, supporting both the data format (SSTable) and all relevant external interfaces
+* :doc:`ScyllaDB Cassandra Compatibility</cassandra-compatibility/>` - ScyllaDB 1.x is a drop-in replacement for Apache Cassandra 2.1.8, supporting both the data format (SSTable) and all relevant external interfaces

* SSTableloader

diff --git a/docs/rst_include/scylla-tools-index.rst b/docs/rst_include/scylla-tools-index.rst
--- a/docs/rst_include/scylla-tools-index.rst
+++ b/docs/rst_include/scylla-tools-index.rst
@@ -5,21 +5,21 @@
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla's Tools</h5>
+ <h5 id="getting-started">ScyllaDB's Tools</h5>
</div>
<div class="medium-9 columns">

-* :doc:`Nodetool Reference</nodetool>` - Scylla commands for managing Scylla node or cluster using the command-line nodetool utility
+* :doc:`Nodetool Reference</nodetool>` - ScyllaDB commands for managing a ScyllaDB node or cluster using the command-line nodetool utility

* cqlsh

* Scyllatop

-* Scylla Admin Commands
+* ScyllaDB Admin Commands

-* :doc:`Scylla Logging Guide</getting-started/logging/>`
+* :doc:`ScyllaDB Logging Guide</getting-started/logging/>`

-* :doc:`Monitoring Scylla</operating-scylla/monitoring/monitoring-apis>`
+* :doc:`Monitoring ScyllaDB</operating-scylla/monitoring/monitoring-apis>`

* Tracing

diff --git a/docs/rst_include/troubleshooting-index.rst b/docs/rst_include/troubleshooting-index.rst
--- a/docs/rst_include/troubleshooting-index.rst
+++ b/docs/rst_include/troubleshooting-index.rst
@@ -1,4 +1,4 @@
-* :doc:`How to Report a Scylla Problem </troubleshooting/report-scylla-problem/>`
+* :doc:`How to Report a ScyllaDB Problem </troubleshooting/report-scylla-problem/>`

* :doc:`SSTable Corruption Problem </troubleshooting/sstable-corruption/>`

@@ -10,13 +10,13 @@

* :doc:`Troubleshoot Monitoring </operating-scylla/monitoring/index/>`

-* :doc:`Troubleshooting guide for Scylla Manager and Scylla Monitoring integration </troubleshooting/manager-monitoring-integration/>`
+* :doc:`Troubleshooting guide for ScyllaDB Manager and ScyllaDB Monitoring integration </troubleshooting/manager-monitoring-integration/>`

-* :doc:`Scylla Fails to Start Due to Wrong Ownership Problems </troubleshooting/change-ownership/>`
+* :doc:`ScyllaDB Fails to Start Due to Wrong Ownership Problems </troubleshooting/change-ownership/>`

-* :doc:`Scylla Large Partitions Table </troubleshooting/large-partition-table/>`
+* :doc:`ScyllaDB Large Partitions Table </troubleshooting/large-partition-table/>`

-* :doc:`Scylla Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>`
+* :doc:`ScyllaDB Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>`

* :doc:`Large Partitions Hunting </troubleshooting/debugging-large-partition/>`

@@ -32,14 +32,14 @@

* :doc:`How to Change Log Level in Runtime </troubleshooting/log-level/>`

-* :doc:`Scylla will not Start </troubleshooting/scylla-wont-start/>`
+* :doc:`ScyllaDB will not Start </troubleshooting/scylla-wont-start/>`

* :doc:`Cluster Timeouts </troubleshooting/timeouts>`

* :doc:`Time Range Queries Do Not Return Some or All of the Data </troubleshooting/time-zone>`

-* :doc:`A change in EPEL broke Scylla Python Script </troubleshooting/python-error-no-module-named-yaml>`
+* :doc:`A change in EPEL broke ScyllaDB Python Script </troubleshooting/python-error-no-module-named-yaml>`

* :doc:`Node Joined With No Data </troubleshooting/node-joined-without-any-data>`

-* :doc:`Scylla Manager is reporting REST API status of healthy nodes as down </troubleshooting/reverse-dns-sshd>`
+* :doc:`ScyllaDB Manager is reporting REST API status of healthy nodes as down </troubleshooting/reverse-dns-sshd>`
diff --git a/docs/troubleshooting/change-ownership.rst b/docs/troubleshooting/change-ownership.rst
--- a/docs/troubleshooting/change-ownership.rst
+++ b/docs/troubleshooting/change-ownership.rst
@@ -1,14 +1,14 @@
ScyllaDB Fails to Start Due to Wrong Ownership Problems
========================================================

-In cases where a Scylla node fails to start because there is improper ownership, the following steps will help.
+In cases where a ScyllaDB node fails to start because there is improper ownership, the following steps will help.

Phenomena
^^^^^^^^^

-Scylla node fails to start.
+ScyllaDB node fails to start.

-In cases where the Scylla node fails to start, check Scylla :doc:`logs </getting-started/logging/>`. If you see the following error message:
+In cases where the ScyllaDB node fails to start, check ScyllaDB :doc:`logs </getting-started/logging/>`. If you see the following error message:
Could not access ``<PATH>: Permission denied std::system_error (error system:13, Permission denied)``.

For example:
@@ -22,7 +22,7 @@ For example:
Problem
^^^^^^^

-The data directories ``/var/lib/scylla/data`` and ``/var/lib/scylla/commitlog`` exist but are not owned by the Scylla user.
+The data directories ``/var/lib/scylla/data`` and ``/var/lib/scylla/commitlog`` exist but are not owned by the ScyllaDB user.

For example:

@@ -58,11 +58,11 @@ Solution
drwxr-xr-x 2 scylla scylla 4096 Jun 18 09:37 commitlog
drwxr-xr-x 7 scylla scylla 97 Jun 18 09:37 data

-3. Start Scylla node.
+3. Start the ScyllaDB node.

.. include:: /rst_include/scylla-commands-start-index.rst

-4. Verify Scylla node is working
+4. Verify that the ScyllaDB node is working.

.. include:: /rst_include/scylla-commands-status-index.rst

diff --git a/docs/troubleshooting/clients-table.rst b/docs/troubleshooting/clients-table.rst
--- a/docs/troubleshooting/clients-table.rst
+++ b/docs/troubleshooting/clients-table.rst
@@ -1,7 +1,7 @@
Clients Table
==============

-This document describes how to work with Scylla's client table, which provides real-time information on CQL clients **currently** connected to the Scylla cluster.
+This document describes how to work with ScyllaDB's client table, which provides real-time information on CQL clients **currently** connected to the ScyllaDB cluster.

Viewing - List Active CQL connections
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -31,6 +31,6 @@ port (CK) Client's outgoing port number
------------------------------------------------ ---------------------------------------------------------------------------------
username Username - when Authentication is used
------------------------------------------------ ---------------------------------------------------------------------------------
-shard_id Scylla node shard handing the connection
+shard_id ScyllaDB node shard handling the connection
================================================ =================================================================================
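
For example, a hedged sketch of listing the current connections from cqlsh, assuming the table is exposed as ``system.clients``:

.. code-block:: cql

   -- lists the CQL clients currently connected to this node
   SELECT * FROM system.clients;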

diff --git a/docs/troubleshooting/copy-from-failed.rst b/docs/troubleshooting/copy-from-failed.rst
--- a/docs/troubleshooting/copy-from-failed.rst
+++ b/docs/troubleshooting/copy-from-failed.rst
@@ -1,7 +1,7 @@
CQL Command ``COPY FROM`` fails - field larger than the field limit
===================================================================

-This troubleshooting guide describes what to do when Scylla fails to import data using the CQL ``COPY FROM`` command
+This troubleshooting guide describes what to do when ScyllaDB fails to import data using the CQL ``COPY FROM`` command.


Problem
diff --git a/docs/troubleshooting/debugging-large-partition.rst b/docs/troubleshooting/debugging-large-partition.rst
--- a/docs/troubleshooting/debugging-large-partition.rst
+++ b/docs/troubleshooting/debugging-large-partition.rst
@@ -10,7 +10,7 @@ What Should Make You Want To Start Looking For A Large Partition?

Any of the following:

-* Latencies on a single shard become very long (look at the "Scylla Overview Metrics" dashboard of `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_).
+* Latencies on a single shard become very long (look at the "ScyllaDB Overview Metrics" dashboard of `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_).
* Oversized allocation warning messages in the log:

.. code-block:: none
@@ -47,7 +47,7 @@ For example:
Using system tables to detect large partitions, rows, or cells
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Large rows and large cells are listed in the ``system.large_rows`` and ``system.large_cells`` tables, respectively. See :doc:`Scylla Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>` for more information.
+Large rows and large cells are listed in the ``system.large_rows`` and ``system.large_cells`` tables, respectively. See :doc:`ScyllaDB Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>` for more information.


When Compaction Creates an Error
diff --git a/docs/troubleshooting/drop-table-space-up.rst b/docs/troubleshooting/drop-table-space-up.rst
--- a/docs/troubleshooting/drop-table-space-up.rst
+++ b/docs/troubleshooting/drop-table-space-up.rst
@@ -1,14 +1,14 @@
Dropped (or truncated) Table (or keyspace) and Disk Space is not Reclaimed
==========================================================================

-This troubleshooting guide describes what to do when Scylla keeps using disk space after a table or keyspaces are dropped or truncated.
+This troubleshooting guide describes what to do when ScyllaDB keeps using disk space after a table or keyspaces are dropped or truncated.

Problem
^^^^^^^

When performing a ``DROP`` or ``TRUNCATE`` operation on a table or keyspace, disk usage is not seen to be reduced.
Usually this is verified by using an external utility like the ``du`` Linux command.
-This is caused by the fact that by default, Scylla creates a snapshot of every dropped table. Space won't be reclaimed until the snapshot is dropped.
+This happens because, by default, ScyllaDB creates a snapshot of every dropped table. Space is not reclaimed until the snapshot is deleted.

Solution
^^^^^^^^
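
A hedged sketch of reclaiming the space by removing the auto-created snapshots with nodetool, where the keyspace name is illustrative:

.. code-block:: shell

   # list existing snapshots and the space they hold
   nodetool listsnapshots
   # remove all snapshots for the keyspace that contained the dropped table
   nodetool clearsnapshot mykeyspace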
diff --git a/docs/troubleshooting/error-messages/invalid-ssl-prot-error.rst b/docs/troubleshooting/error-messages/invalid-ssl-prot-error.rst
--- a/docs/troubleshooting/error-messages/invalid-ssl-prot-error.rst
+++ b/docs/troubleshooting/error-messages/invalid-ssl-prot-error.rst
@@ -2,9 +2,9 @@
Invalid SSL Protocol
====================

-Trying to connect cqlsh with Scylla 3.x results in **TLSv1.2 is not a valid SSL protocol** error.
-Recent Scylla versions did not allow the use of the TLSv1 protocol and yet cqlsh seems to use it by default.
-The solution is to upgrade to a more recent version of Scylla which contains the patch to fix the issue.
+Trying to connect cqlsh with ScyllaDB 3.x results in **TLSv1.2 is not a valid SSL protocol** error.
+Recent ScyllaDB versions did not allow the use of the TLSv1 protocol and yet cqlsh seems to use it by default.
+The solution is to upgrade to a more recent version of ScyllaDB which contains the patch to fix the issue.
If this is not an option, change the cqlshrc file to contain the following:

.. code-block:: yaml
diff --git a/docs/troubleshooting/error-messages/kb-fs-not-qualified-aio.rst b/docs/troubleshooting/error-messages/kb-fs-not-qualified-aio.rst
--- a/docs/troubleshooting/error-messages/kb-fs-not-qualified-aio.rst
+++ b/docs/troubleshooting/error-messages/kb-fs-not-qualified-aio.rst
@@ -17,7 +17,7 @@ There can be two causes for this error:

* Remedy: upgrade your kernel

-Scylla requires using the XFS filesystem, since it is the only Linux filesystem with good Asynchronous I/O support. In addition, Linux kernels before 3.15 did not have good asynchronous append support, which is required by Scylla.
+ScyllaDB requires using the XFS filesystem, since it is the only Linux filesystem with good Asynchronous I/O support. In addition, Linux kernels before 3.15 did not have good asynchronous append support, which is required by ScyllaDB.

If you are using Red Hat Enterprise Linux or CentOS, use version 7.2 or higher of the operating system. These versions contain a kernel that provides the necessary support.

diff --git a/docs/troubleshooting/error-messages/schema-mismatch.rst b/docs/troubleshooting/error-messages/schema-mismatch.rst
--- a/docs/troubleshooting/error-messages/schema-mismatch.rst
+++ b/docs/troubleshooting/error-messages/schema-mismatch.rst
@@ -17,7 +17,7 @@ For example
Problem
^^^^^^^

-One or more Scylla nodes have a schema mismatch.
+One or more ScyllaDB nodes have a schema mismatch.

How to Verify
^^^^^^^^^^^^^
diff --git a/docs/troubleshooting/failed-decommission.rst b/docs/troubleshooting/failed-decommission.rst
--- a/docs/troubleshooting/failed-decommission.rst
+++ b/docs/troubleshooting/failed-decommission.rst
@@ -3,7 +3,7 @@ Failed Decommission

This article describes the troubleshooting procedure when node decommission fails.

-During decommissioning, the streaming process starts, and the node streams its data to the other nodes in the Scylla cluster.
+During decommissioning, the streaming process starts, and the node streams its data to the other nodes in the ScyllaDB cluster.
The process may fail if the node fails to read from the HDD or a network problem occurs.


@@ -26,7 +26,7 @@ The following error message will appear in the logs_:

.. code-block:: shell

- nodetool: Scylla API server HTTP POST to URL '/storage_service/decommission' failed: stream_ranges failed
+ nodetool: ScyllaDB API server HTTP POST to URL '/storage_service/decommission' failed: stream_ranges failed

Solution
^^^^^^^^
diff --git a/docs/troubleshooting/index.rst b/docs/troubleshooting/index.rst
--- a/docs/troubleshooting/index.rst
+++ b/docs/troubleshooting/index.rst
@@ -16,7 +16,7 @@ Troubleshooting ScyllaDB
monitor/index


-Scylla's troubleshooting section contains articles which are targeted to pinpoint and answer problems with Scylla. For broader issues and workarounds, consult the :doc:`Knowledge base </kb/index>`.
+ScyllaDB's troubleshooting section contains articles that help you pinpoint and resolve problems with ScyllaDB. For broader issues and workarounds, consult the :doc:`Knowledge base </kb/index>`.
Keep your versions up-to-date. The two latest versions are supported. Also, always install the latest patches for your version.


@@ -32,7 +32,7 @@ Keep your versions up-to-date. The two latest versions are supported. Also, alwa
* :doc:`Data Modeling <modeling/index>`
* :doc:`Data Storage and SSTables <storage/index>`
* :doc:`CQL errors <CQL/index>`
- * :doc:`ScyllaDB Monitoring and Scylla Manager <monitor/index>`
+ * :doc:`ScyllaDB Monitoring and ScyllaDB Manager <monitor/index>`

-Also check out the `Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on Scylla University, which covers how to troubleshoot different issues when running a Scylla cluster.
+Also check out the `Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on ScyllaDB University, which covers how to troubleshoot different issues when running a ScyllaDB cluster.

diff --git a/docs/troubleshooting/large-partition-table.rst b/docs/troubleshooting/large-partition-table.rst
--- a/docs/troubleshooting/large-partition-table.rst
+++ b/docs/troubleshooting/large-partition-table.rst
@@ -1,19 +1,19 @@
ScyllaDB Large Partitions Table
================================

-This document describes how to work with Scylla's large partitions table.
+This document describes how to work with ScyllaDB's large partitions table.
The large partitions table can be used to trace large partitions in a cluster.
The table is updated every time a partition is written and/or deleted, and includes a compaction process which flushes MemTables to SSTables.

Large Partitions can cause any of the following symptoms:

-* Longer latencies on a single shard (look at the "Scylla Overview Metrics" dashboard of `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_).
+* Longer latencies on a single shard (look at the "ScyllaDB Overview Metrics" dashboard of `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_).
* Oversized allocation warning messages in the log (e.g. ``seastar_memory - oversized allocation: 2842624 bytes, please report``)

If you are experiencing any of the above, search to see if you have large partitions.

Note that large partitions are detected only when they are stored in a single SSTable.
-Scylla does not account for data belonging to the same logical partition, but spread across multiple SSTables, as long as any single partition in each SSTable does not cross the large partitions warning threshold.
+ScyllaDB does not account for data belonging to the same logical partition, but spread across multiple SSTables, as long as any single partition in each SSTable does not cross the large partitions warning threshold.
However, note that over time, compaction, and Size-Tiered Compaction Strategy in particular, may collect the dispersed partition data from several SSTables and store it in a single SSTable, thus crossing the large partitions threshold.

Viewing - Find Large Partitions
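
A hedged sketch of such a query from cqlsh; ``*`` is used because the exact column set may vary by version:

.. code-block:: cql

   -- partitions that crossed the configured size or row-count thresholds
   SELECT * FROM system.large_partitions;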
@@ -88,7 +88,7 @@ Configure
Configure the detection thresholds of large partitions with the ``compaction_large_partition_warning_threshold_mb`` parameter (default: 1000MB)
and the ``compaction_rows_count_warning_threshold`` parameter (default 100000)
in the scylla.yaml configuration file.
-Partitions that are bigger than the size threshold and/or hold more than the rows count threshold are reported in the ``system.large_partitions`` table and generate a warning in the Scylla log (refer to :doc:`log </getting-started/logging/>`).
+Partitions that are bigger than the size threshold and/or hold more than the rows count threshold are reported in the ``system.large_partitions`` table and generate a warning in the ScyllaDB log (refer to :doc:`log </getting-started/logging/>`).

For example (set to 500MB / 50000, respectively):

diff --git a/docs/troubleshooting/large-rows-large-cells-tables.rst b/docs/troubleshooting/large-rows-large-cells-tables.rst
--- a/docs/troubleshooting/large-rows-large-cells-tables.rst
+++ b/docs/troubleshooting/large-rows-large-cells-tables.rst
@@ -1,8 +1,8 @@
ScyllaDB Large Rows and Large Cells Tables
===========================================

-This document describes how to detect large rows and large cells in Scylla.
-Scylla is not optimized for very large rows or large cells. They require allocation of large, contiguous memory areas and therefore may increase latency.
+This document describes how to detect large rows and large cells in ScyllaDB.
+ScyllaDB is not optimized for very large rows or large cells. They require allocation of large, contiguous memory areas and therefore may increase latency.
Rows may also grow over time. For example, many insert operations may add elements to the same collection, or a large blob can be inserted in a single operation.

Similar to the :doc:`large partitions table <large-partition-table>`, the large rows and large cells tables are updated when sstables are written or deleted, for example, on memtable flush or during compaction.
@@ -106,7 +106,7 @@ Configure the detection threshold of large rows and large cells with the corresp
* ``compaction_collection_elements_count_warning_threshold`` parameter (default: 10000).

Once the threshold is reached, the relevant information is captured in the ``system.large_rows`` / ``system.large_cells`` tables.
-In addition, a warning message is logged in the Scylla log (refer to :doc:`logging </getting-started/logging>`).
+In addition, a warning message is logged in the ScyllaDB log (refer to :doc:`logging </getting-started/logging>`).
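
A hedged sketch of inspecting both tables from cqlsh:

.. code-block:: cql

   SELECT * FROM system.large_rows;
   SELECT * FROM system.large_cells;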


Storing
diff --git a/docs/troubleshooting/log-level.rst b/docs/troubleshooting/log-level.rst
--- a/docs/troubleshooting/log-level.rst
+++ b/docs/troubleshooting/log-level.rst
@@ -1,7 +1,7 @@
Change the Log Level
====================

-You have the option to change the log level either while the cluster is offline or during runtime. Each log level is assigned to a specific Scylla class. To display the log classes (output changes with each version), run the following:
+You have the option to change the log level either while the cluster is offline or during runtime. Each log level is assigned to a specific ScyllaDB class. To display the log classes (output changes with each version), run the following:

.. code-block:: shell

@@ -11,7 +11,7 @@ You have the option to change the log level either while the cluster is offline
How to Change the Log Level without Downtime
--------------------------------------------

-Scylla presents the user with a variety of loggers that control the amount and detail of information printed to the system logs. This article contains information about how to query and change the log level of each individual logging system.
+ScyllaDB presents the user with a variety of loggers that control the amount and detail of information printed to the system logs. This article contains information about how to query and change the log level of each individual logging system.


To obtain the status of a particular logger:
@@ -32,24 +32,24 @@ To change the status of a particular logger:

Valid log levels are: ``trace``, ``debug``, ``info``, ``warn``, ``error``.

-Alternatively, you can use Nodetool commands. Refer to :doc:`setlogginglevel</operating-scylla/nodetool-commands/setlogginglevel>` to set the logging level threshold for Scylla classes.
+Alternatively, you can use Nodetool commands. Refer to :doc:`setlogginglevel</operating-scylla/nodetool-commands/setlogginglevel>` to set the logging level threshold for ScyllaDB classes.
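
For example, a hedged sketch using nodetool, where the logger class name is illustrative and should be taken from the list of classes mentioned above:

.. code-block:: shell

   nodetool setlogginglevel compaction debug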

How to Change Log Level Offline
-------------------------------

-In order to debug issues that occur during the Scylla start, the procedure above will not be helpful. This procedure can be used to assess Scylla Start issues, or it can be used when the cluster is down. Note that any changes made here will only take effect after Scylla starts.
+The procedure above will not help you debug issues that occur during ScyllaDB startup. Use this procedure to assess ScyllaDB startup issues, or when the cluster is down. Note that any changes made here only take effect after ScyllaDB starts.

-Scylla has command line options you can invoke to set the log level. Once set, the change is implemented during the Scylla start. Thus users can append new log level options by editing the SCYLLA_ARGS parameter in ``/etc/sysconfig/scylla-server``.
+ScyllaDB has command-line options for setting the log level. Once set, the change takes effect when ScyllaDB starts. You can append log level options by editing the SCYLLA_ARGS parameter in ``/etc/sysconfig/scylla-server``.



-**Scylla Options**
+**ScyllaDB Options**

.. list-table::
:widths: 50 50
:header-rows: 1

- * - Scylla Options
+ * - ScyllaDB Options
- Description
* - ``--default-log-level arg (=info)``
- Default log level for log messages. Valid values are trace, debug, info, warn, error.
diff --git a/docs/troubleshooting/manager-monitoring-integration.rst b/docs/troubleshooting/manager-monitoring-integration.rst
--- a/docs/troubleshooting/manager-monitoring-integration.rst
+++ b/docs/troubleshooting/manager-monitoring-integration.rst
@@ -7,32 +7,32 @@ Troubleshooting guide for ScyllaDB Manager and ScyllaDB Monitoring integration
Symptom
-------

-Scylla Manager and Scylla Monitoring are installed, but when you look at Scylla Monitoring, the Scylla Manager dashboard shows the status of Scylla Manager as not connected.
+ScyllaDB Manager and ScyllaDB Monitoring are installed, but when you look at ScyllaDB Monitoring, the ScyllaDB Manager dashboard shows the status of ScyllaDB Manager as not connected.

The following procedure contains several tests to pinpoint the integration issue.

Solution
--------

-1. Verify that Scylla Manager Server is up and running. From the Scylla Monitoring node, run the following Scylla Manager commands:
+1. Verify that ScyllaDB Manager Server is up and running. From the ScyllaDB Monitoring node, run the following ScyllaDB Manager commands:

.. code-block:: none

sctool version
sctool status -c <CLUSTERNAME>

-If you get a response with no errors, Scylla Manager is running.
+If you get a response with no errors, ScyllaDB Manager is running.


-2. Verify that Scylla Monitoring is running with the Manager Dashboard (Monitoring server) by running the command for monitoring, including the ``-M`` flag, which specifies the Manager Dashboard version. For example, 2.0.
+2. Verify that ScyllaDB Monitoring is running with the Manager Dashboard (Monitoring server) by running the command for monitoring, including the ``-M`` flag, which specifies the Manager Dashboard version. For example, 2.0.

.. code-block:: none

/start-all.sh -s path/to/scylla_servers.yml -n path/to/node_exporter_servers.yml -d path/to/mydata -v 3.0 -M 2.0

-3. From Scylla Monitoring, check the Scylla Manager Dashboard and confirm if the Scylla Manager dashboard shows Scylla Manager as connected. If yes, you do not need to continue. If no, continue to the next step.
+3. From ScyllaDB Monitoring, check the ScyllaDB Manager Dashboard and confirm if the ScyllaDB Manager dashboard shows ScyllaDB Manager as connected. If yes, you do not need to continue. If no, continue to the next step.

-4. The issue might be a case where the IP addresses are not synchronized. This happens when Scylla Manager binds the Prometheus API to one IP address and the Prometheus pulls Manager metrics from a different IP address.
+4. The issue might be a case where the IP addresses are not synchronized. This happens when ScyllaDB Manager binds the Prometheus API to one IP address and the Prometheus pulls Manager metrics from a different IP address.

.. note:: When Monitoring and Manager are running on the same server, this IP might be **different** than 127.0.0.1 (localhost).

@@ -47,15 +47,15 @@ If you get a response with no errors, Scylla Manager is running.
prometheus: '172.17.0.1:5090'


- * In ``scylla-monitoring/prometheus/scylla_manager_servers.yml``, change the IP address Prometheus uses to pull Scylla Manager metrics from. The IP address is set to ``172.17.0.1:5090`` by default.
+ * In ``scylla-monitoring/prometheus/scylla_manager_servers.yml``, change the IP address Prometheus uses to pull ScyllaDB Manager metrics from. The IP address is set to ``172.17.0.1:5090`` by default.

.. code-block:: none

- targets:
- 172.17.0.1:5090

-5. If you are not using the Scylla Monitoring stack (Docker), and are using your own Prometheus stack, check that the Scylla Manager target is listed.
-Navigate to: ``http://[Prometheus_IP]:9090/targets (status menu -> targets)``. It may be that only Scylla and Node_Exporter sections are there, and Scylla Manager is missing:
+5. If you are not using the ScyllaDB Monitoring stack (Docker), and are using your own Prometheus stack, check that the ScyllaDB Manager target is listed.
+Navigate to: ``http://[Prometheus_IP]:9090/targets (status menu -> targets)``. It may be that only ScyllaDB and Node_Exporter sections are there, and ScyllaDB Manager is missing:

.. image:: Prometheus1.png

diff --git a/docs/troubleshooting/missing-dotmount-files.rst b/docs/troubleshooting/missing-dotmount-files.rst
--- a/docs/troubleshooting/missing-dotmount-files.rst
+++ b/docs/troubleshooting/missing-dotmount-files.rst
@@ -54,7 +54,7 @@ To restore ``/etc/systemd/system/var-lib-scylla.mount``, run the following (spec
$ UUID=`blkid -s UUID -o value <specify your data disk, eg: /dev/md0>`
$ cat << EOS | sudo tee /etc/systemd/system/var-lib-scylla.mount
[Unit]
- Description=Scylla data directory
+ Description=ScyllaDB data directory
Before=scylla-server.service
After=local-fs.target
DefaultDependencies=no
diff --git a/docs/troubleshooting/modeling/index.rst b/docs/troubleshooting/modeling/index.rst
--- a/docs/troubleshooting/modeling/index.rst
+++ b/docs/troubleshooting/modeling/index.rst
@@ -5,8 +5,8 @@ Data Modeling
:hidden:
:maxdepth: 2

- Scylla Large Partitions Table </troubleshooting/large-partition-table/>
- Scylla Large Rows and Cells Table </troubleshooting/large-rows-large-cells-tables/>
+ ScyllaDB Large Partitions Table </troubleshooting/large-partition-table/>
+ ScyllaDB Large Rows and Cells Table </troubleshooting/large-rows-large-cells-tables/>
Large Partitions Hunting </troubleshooting/debugging-large-partition/>
Failure to Update the Schema </troubleshooting/failed-update-schema>

@@ -15,20 +15,20 @@ Data Modeling
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla Data Modeling</h5>
+ <h5 id="getting-started">ScyllaDB Data Modeling</h5>
</div>
<div class="medium-9 columns">


-* :doc:`Scylla Large Partitions Table </troubleshooting/large-partition-table/>`
+* :doc:`ScyllaDB Large Partitions Table </troubleshooting/large-partition-table/>`

-* :doc:`Scylla Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>`
+* :doc:`ScyllaDB Large Rows and Cells Tables </troubleshooting/large-rows-large-cells-tables/>`

* :doc:`Large Partitions Hunting </troubleshooting/debugging-large-partition/>`

* :doc:`Failure to Update the Schema </troubleshooting/failed-update-schema>`

-`Data Modeling course <https://university.scylladb.com/courses/data-modeling/>`_ on Scylla University
+`Data Modeling course <https://university.scylladb.com/courses/data-modeling/>`_ on ScyllaDB University

.. raw:: html

diff --git a/docs/troubleshooting/monitor/index.rst b/docs/troubleshooting/monitor/index.rst
--- a/docs/troubleshooting/monitor/index.rst
+++ b/docs/troubleshooting/monitor/index.rst
@@ -9,23 +9,23 @@ ScyllaDB Monitor and Manager
Manager lists healthy nodes as down </troubleshooting/reverse-dns-sshd>

.. panel-box::
- :title: Scylla Monitor and Manager Issues
+ :title: ScyllaDB Monitor and Manager Issues
:id: "getting-started"
:class: my-panel


* `Troubleshoot ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_.
-* :doc:`Troubleshooting guide for Scylla Manager and Scylla Monitoring integration </troubleshooting/manager-monitoring-integration>`
-* :doc:`Scylla Manager is reporting REST API status of healthy nodes as down </troubleshooting/reverse-dns-sshd>`
+* :doc:`Troubleshooting guide for ScyllaDB Manager and ScyllaDB Monitoring integration </troubleshooting/manager-monitoring-integration>`
+* :doc:`ScyllaDB Manager is reporting REST API status of healthy nodes as down </troubleshooting/reverse-dns-sshd>`


.. panel-box::
- :title: Related lessons on Scylla University
+ :title: Related lessons on ScyllaDB University
:id: "getting-started"
:class: my-panel

-`Scylla Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on Scylla University
+`ScyllaDB Monitoring lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-monitoring/>`_ on ScyllaDB University

-`Scylla Manager, Repair and Tombstones lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-manager-repair-and-tombstones/>`_ on Scylla University
+`ScyllaDB Manager, Repair and Tombstones lesson <https://university.scylladb.com/courses/scylla-operations/lessons/scylla-manager-repair-and-tombstones/>`_ on ScyllaDB University


diff --git a/docs/troubleshooting/nodetool-memory-read-timeout.rst b/docs/troubleshooting/nodetool-memory-read-timeout.rst
--- a/docs/troubleshooting/nodetool-memory-read-timeout.rst
+++ b/docs/troubleshooting/nodetool-memory-read-timeout.rst
@@ -14,7 +14,7 @@ When running any Nodetool command, users may see the following error:

Analysis
^^^^^^^^
-Nodetool is a Java based application which requires memory. Scylla by default consumes 93% of the node’s RAM (for MemTables + Cache) and leaves 7% for other applications, such as nodetool.
+Nodetool is a Java-based application that requires memory. By default, ScyllaDB consumes 93% of the node’s RAM (for MemTables + Cache) and leaves 7% for other applications, such as nodetool.

In cases where this is not enough memory (e.g. small instances with ~64GB RAM or lower), Nodetool may not be able to run due to insufficient memory. In this case, an out of memory (OOM) error may appear and scylla-jmx will not run.

@@ -41,7 +41,7 @@ If the service is running you will see something similar to:
.. code-block:: none

sudo service scylla-jmx status
- ● scylla-jmx.service - Scylla JMX
+ ● scylla-jmx.service - ScyllaDB JMX
Loaded: loaded (/lib/systemd/system/scylla-jmx.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2018-07-18 20:59:08 UTC; 3s ago
Main PID: 256050 (scylla-jmx)
@@ -56,7 +56,7 @@ If it isn't, you will see an error similar to:
.. code-block:: none

sudo systemctl status scylla-jmx
- ● scylla-jmx.service - Scylla JMX
+ ● scylla-jmx.service - ScyllaDB JMX
Loaded: loaded (/usr/lib/systemd/system/scylla-jmx.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-05-10 10:34:15 EDT; 3min 47s ago
Process: 1417 ExecStart=/usr/lib/scylla/jmx/scylla-jmx $SCYLLA_JMX_PORT $SCYLLA_API_PORT $SCYLLA_API_ADDR $SCYLLA_JMX_ADDR
@@ -101,7 +101,7 @@ There are two ways to fix this problem, one is faster but may not permanently fi
* Ubuntu: ``/etc/default/scylla-server``.
* Red Hat/ CentOS: ``/etc/sysconfig/scylla-server``
3. In the file you are editing, add to the ``SCYLLA_ARGS`` statement ``--reserve-memory 5G`` (the amount you calculated above). Save and exit.
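
For example, the edited line might look like the following sketch, where ``<existing arguments>`` is a placeholder for whatever is already set:

.. code-block:: shell

   SCYLLA_ARGS="<existing arguments> --reserve-memory 5G"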
-4. Restart Scylla server
+4. Restart the ScyllaDB server

.. code-block:: none

diff --git a/docs/troubleshooting/password-reset.rst b/docs/troubleshooting/password-reset.rst
--- a/docs/troubleshooting/password-reset.rst
+++ b/docs/troubleshooting/password-reset.rst
@@ -9,7 +9,7 @@ The procedure requires cluster downtime and as a result, all auth data is delete
Procedure
.........

-| 1. Stop Scylla nodes (**Stop all the nodes in the cluster**).
+| 1. Stop ScyllaDB nodes (**Stop all the nodes in the cluster**).

.. code-block:: shell

@@ -21,14 +21,14 @@ Procedure

rm -rf /var/lib/scylla/data/system/role*

-| 3. Start Scylla nodes.
+| 3. Start ScyllaDB nodes.

.. code-block:: shell

sudo systemctl start scylla-server

| 4. Verify that you can log in to your node using ``cqlsh`` command.
-| The access is only possible using Scylla superuser.
+| Access is only possible using the ScyllaDB superuser.

.. code-block:: cql

diff --git a/docs/troubleshooting/pointless-compactions.rst b/docs/troubleshooting/pointless-compactions.rst
--- a/docs/troubleshooting/pointless-compactions.rst
+++ b/docs/troubleshooting/pointless-compactions.rst
@@ -6,7 +6,7 @@ This guide describes what to do if you start having many tombstones compactions
Phenomena
^^^^^^^^^

-Scylla's CPU utilization is unexpectedly high and we see too many compactions while there are not many
+ScyllaDB's CPU utilization is unexpectedly high and you see too many compactions while there are not many
WRITE/UPDATE/DELETE/TTLed operations and you see many messages like this in the syslog:

.. code-block:: shell
@@ -16,9 +16,9 @@ WRITE/UPDATE/DELETE/TTLed operations and you see many messages like this in the
Problem
^^^^^^^

-Scylla SSTables can have expired or soon to be expired tombstones in them and there is a need to clean them up eventually.
+ScyllaDB SSTables can contain expired or soon-to-expire tombstones, which eventually need to be cleaned up.
Tombstones can be generated by DELETE operations, TTL data, insertion of null fields or usage of collections.
-Compactions will get rid of expired tombstones, but if there are no compactions currently happening, Scylla may apply
+Compactions will get rid of expired tombstones, but if there are no compactions currently happening, ScyllaDB may apply
heuristics to force compactions on a lone table that has a certain ratio of expired tombstones.

To validate that the SSTable being compacted indeed has tombstones:
diff --git a/docs/troubleshooting/python-error-no-module-named-yaml.rst b/docs/troubleshooting/python-error-no-module-named-yaml.rst
--- a/docs/troubleshooting/python-error-no-module-named-yaml.rst
+++ b/docs/troubleshooting/python-error-no-module-named-yaml.rst
@@ -3,7 +3,7 @@ A change in EPEL broke ScyllaDB Python Script

Phenomena
^^^^^^^^^
-When upgrading CentOS on a Scylla node, Scylla setup script, like scylla_prepare or scylla_setup fails with the following error:
+When upgrading CentOS on a ScyllaDB node, a ScyllaDB setup script, such as scylla_prepare or scylla_setup, fails with the following error:

.. code-block:: python

@@ -17,7 +17,7 @@ When upgrading CentOS on a Scylla node, Scylla setup script, like scylla_prepare
Problem
^^^^^^^

-The source cause is a change in EPEL repository upgrade, breaking backward compatibility by moving from Python34 to Python36, and dropping PyYAML library in the process. Scylla uses PyYAML in a few of its Python scripts.
+The root cause is a change in an EPEL repository upgrade, which broke backward compatibility by moving from Python34 to Python36 and dropping the PyYAML library in the process. ScyllaDB uses PyYAML in a few of its Python scripts.


Bypass
@@ -32,7 +32,7 @@ Install the python36 version of PyYAML
Solution
^^^^^^^^

-In future releases, we will provide a more robust solution by encapsulating Python as part of Scylla Installation. More on this in the blog post `The Complex Path for a Simple Portable Python Interpreter, or Snakes on a Data Plane <https://www.scylladb.com/2019/02/14/the-complex-path-for-a-simple-portable-python-interpreter-or-snakes-on-a-data-plane/>`_.
+In future releases, we will provide a more robust solution by encapsulating Python as part of ScyllaDB Installation. More on this in the blog post `The Complex Path for a Simple Portable Python Interpreter, or Snakes on a Data Plane <https://www.scylladb.com/2019/02/14/the-complex-path-for-a-simple-portable-python-interpreter-or-snakes-on-a-data-plane/>`_.



diff --git a/docs/troubleshooting/report-scylla-problem.rst b/docs/troubleshooting/report-scylla-problem.rst
--- a/docs/troubleshooting/report-scylla-problem.rst
+++ b/docs/troubleshooting/report-scylla-problem.rst
@@ -6,36 +6,36 @@ In the event there is an issue you would like to report to ScyllaDB support, you

In general, there are two types of issues:

-* **ScyllaDB failure** - There is some kind of failure, possibly due to a connectivity issue, a timeout, or otherwise, where the Scylla server or the Scylla nodes are not working. These cases require you to send :ref:`Scylla Doctor vitals and ScyllaDB logs <report-scylla-problem-scylla-doctor>`, as well as `Core Dump`_ files (if available), to ScyllaDB support.
-* **ScyllaDB performance** - you have noticed some type of degradation of service with Scylla reads or writes. If it is clearly a performance case and not a failure, refer to `Report a performance problem`_.
+* **ScyllaDB failure** - There is some kind of failure, possibly due to a connectivity issue, a timeout, or otherwise, where the ScyllaDB server or the ScyllaDB nodes are not working. These cases require you to send :ref:`ScyllaDB Doctor vitals and ScyllaDB logs <report-scylla-problem-scylla-doctor>`, as well as `Core Dump`_ files (if available), to ScyllaDB support.
+* **ScyllaDB performance** - you have noticed some type of degradation of service with ScyllaDB reads or writes. If it is clearly a performance case and not a failure, refer to `Report a performance problem`_.

Once you have used our diagnostic tools to report the current status, you need to `Send files to ScyllaDB support`_ for further analysis.

-Make sure the Scylla system logs are configured properly to report info level messages: `install debug info <https://github.com/scylladb/scylla/wiki/How-to-install-scylla-debug-info/>`_.
+Make sure the ScyllaDB system logs are configured properly to report info level messages: `install debug info <https://github.com/scylladb/scylla/wiki/How-to-install-scylla-debug-info/>`_.

.. note::
If you are unsure which reports need to be included, `Open a support ticket or GitHub issue`_ and consult with the ScyllaDB team.


.. _report-scylla-problem-scylla-doctor:

-Scylla Doctor
+ScyllaDB Doctor
^^^^^^^^^^^^^^^

-Scylla Doctor is a troubleshooting tool that checks the node status regarding
+ScyllaDB Doctor is a troubleshooting tool that checks the node status regarding
system requirements, configuration, and tuning. The collected information is
output as a ``.vitals.json`` file and an archive file with ScyllaDB logs.
You need to run the tool **on every node in the cluster**.

-#. Download Scylla Doctor as a Linux package or a generic tarball:
+#. Download ScyllaDB Doctor as a Linux package or a generic tarball:

* Ubuntu/Debian (DEB): https://downloads.scylladb.com/downloads/scylla-doctor/deb/
* RHEL/Rocky (RPM): https://downloads.scylladb.com/downloads/scylla-doctor/rpm/
* Tarball: https://downloads.scylladb.com/downloads/scylla-doctor/tar/

-#. Run Scylla Doctor on every node in the cluster.
+#. Run ScyllaDB Doctor on every node in the cluster.

- * If you installed Scylla Doctor with DEB or RPM, you can run it with
+ * If you installed ScyllaDB Doctor with DEB or RPM, you can run it with
the ``scylla-doctor`` command.

* If you downloaded the tarball, extract the ``scylla_doctor.pyz`` file and
@@ -49,9 +49,9 @@ You need to run the tool **on every node in the cluster**.
Make sure you provide a unique host identifier in the filename, such as
the host IP.

- Running Scylla Doctor will generate:
+ Running ScyllaDB Doctor will generate:

- * ``<unique-host-id>.vitals.json`` - Scylla Doctor vitals
+ * ``<unique-host-id>.vitals.json`` - ScyllaDB Doctor vitals
* ``scylla_logs_<timestamp>.tar.gz`` - ScyllaDB logs

**Authenticated Clusters**
@@ -64,7 +64,7 @@ You need to run the tool **on every node in the cluster**.

-sov CQL,user,<CQL user name> -sov CQL,password,<CQL password>

- Scylla Doctor employs cqlsh installed on a given node using the provided
+ ScyllaDB Doctor employs cqlsh installed on a given node using the provided
credentials. Make sure to set up any additional configuration required to
use cqlsh, such as TLS-related information, in the .cqlshrc file.

@@ -86,7 +86,7 @@ You need to run the tool **on every node in the cluster**.
Core Dump
^^^^^^^^^

-When Scylla fails, it creates a core dump which can later be used to debug the issue. The file is written to ``/var/lib/scylla/coredump``. If there is no file in the directory, see `Troubleshooting Core Dump`_.
+When ScyllaDB fails, it creates a core dump which can later be used to debug the issue. The file is written to ``/var/lib/scylla/coredump``. If there is no file in the directory, see `Troubleshooting Core Dump`_.


Compress the core dump file
@@ -111,7 +111,7 @@ In the event the ``/var/lib/scylla/coredump`` directory is empty, the following
Operating System not set to generate core dump files
....................................................

-If Scylla restarts for some reason and there is no core dump file, the OS system daemon needs to be modified.
+If ScyllaDB restarts for some reason and there is no core dump file, the operating system needs to be configured to generate core dumps.

**Procedure**

@@ -120,7 +120,7 @@ If Scylla restarts for some reason and there is no core dump file, the OS system
2. Refer to :ref:`generate core dumps <admin-core-dumps>` for details.


-.. note:: You will need spare disk space larger than that of Scylla's RAM.
+.. note:: You will need spare disk space larger than the amount of RAM allocated to ScyllaDB.


Core dump file exists, but not where you expect it to be
@@ -138,39 +138,39 @@ If the ``scylla/coredump`` directory is empty even after you changed the custom
Report a performance problem
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-If you are experiencing a performance issue when using Scylla, let us know and we can help. To save time and increase the likelihood of a speedy solution, it is important to supply us with as much information as possible.
+If you are experiencing a performance issue when using ScyllaDB, let us know and we can help. To save time and increase the likelihood of a speedy solution, it is important to supply us with as much information as possible.

Include the following information in your report:

-* Complete :ref:`Scylla Doctor Vitals <report-scylla-problem-scylla-doctor>`
+* Complete :ref:`ScyllaDB Doctor Vitals <report-scylla-problem-scylla-doctor>`
* A `Server Metrics`_ Report
* A `Client Metrics`_ Report
* The contents of your tracing data. See :ref:`Collecting Tracing Data <tracing-collecting-tracing-data>`.

Metrics Reports
...............

-There are two types of metrics you need to collect: Scylla Server and Scylla Client (node). The Scylla Server metrics can be displayed using an external monitoring service like `Scylla Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ or they can be collected using `scyllatop <http://www.scylladb.com/2016/03/22/scyllatop/>`_ and other commands.
+There are two types of metrics you need to collect: ScyllaDB Server and ScyllaDB Client (node). The ScyllaDB Server metrics can be displayed using an external monitoring service like `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/>`_ or they can be collected using `scyllatop <http://www.scylladb.com/2016/03/22/scyllatop/>`_ and other commands.

.. note::
- It is highly recommended to use the Scylla monitoring stack so that the Prometheus metrics collected can be shared.
+ It is highly recommended to use the ScyllaDB monitoring stack so that the Prometheus metrics collected can be shared.

Server Metrics
~~~~~~~~~~~~~~

-There are several commands you can use to see if there is a performance issue on the Scylla Server. Note that checking the CPU load using ``top`` is not a good metric for checking Scylla.
+There are several commands you can use to see if there is a performance issue on the ScyllaDB Server. Note that the CPU load reported by ``top`` is not a good indicator of ScyllaDB's load.
Use ``scyllatop`` instead.

.. note::
- To help the ScyllaDB support team assess your problem, it is best to pipe the results to a file which you can attach with Scylla Doctor vitals and ScyllaDB logs.
+ To help the ScyllaDB support team assess your problem, it is best to pipe the results to a file which you can attach with ScyllaDB Doctor vitals and ScyllaDB logs.

-1. Check the ``Send files to ScyllaDB supportgauge-load``. If the load is close to 100%, the bottleneck is Scylla CPU.
+1. Check the ``gauge-load`` metric. If the load is close to 100%, the bottleneck is the ScyllaDB CPU.

.. code-block:: shell

scyllatop *gauge-load

-2. Check if one of Scylla core is busier than the others:
+2. Check if one of the ScyllaDB cores is busier than the others:

.. code-block:: shell

@@ -224,7 +224,7 @@ You can also see the results in `./report` dir
Server Metrics with Prometheus
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-When using `Grafana and Prometheus to monitor Scylla <https://github.com/scylladb/scylla-monitoring>`_, sharing the metrics stored in Prometheus is very useful. This procedure shows how to gather the metrics from the monitoring server.
+When using `Grafana and Prometheus to monitor ScyllaDB <https://github.com/scylladb/scylla-monitoring>`_, sharing the metrics stored in Prometheus is very useful. This procedure shows how to gather the metrics from the monitoring server.

**Procedure**

@@ -252,7 +252,7 @@ When using `Grafana and Prometheus to monitor Scylla <https://github.com/scyllad
Client Metrics
~~~~~~~~~~~~~~

-Check the client CPU using ``top``. If the CPU is close to 100%, the bottleneck is the client CPU. In this case, you should add more loaders to stress Scylla.
+Check the client CPU using ``top``. If the CPU is close to 100%, the bottleneck is the client CPU. In this case, you should add more loaders to stress ScyllaDB.

.. _report-problem-send-files-to-support:

@@ -303,17 +303,17 @@ If you have not done so already, supply ScyllaDB support with the UUID. Keep in

1. Do *one* of the following:

-* If you are a Scylla customer, open a `Support Ticket`_ and **include the UUID** within the ticket.
+* If you are a ScyllaDB customer, open a `Support Ticket`_ and **include the UUID** within the ticket.

.. _Support Ticket: http://scylladb.com/support


-* If you are a Scylla user, open an issue on `GitHub`_ and **include the UUID** within the issue.
+* If you are a ScyllaDB user, open an issue on `GitHub`_ and **include the UUID** within the issue.

.. _GitHub: https://github.com/scylladb/scylla/issues/new


See Also
........

-`Scylla benchmark results <http://www.scylladb.com/technology/cassandra-vs-scylla-benchmark-cluster-1/>`_ for an example of the level of details required in your reports.
+`ScyllaDB benchmark results <http://www.scylladb.com/technology/cassandra-vs-scylla-benchmark-cluster-1/>`_ for an example of the level of details required in your reports.
diff --git a/docs/troubleshooting/reverse-dns-sshd.rst b/docs/troubleshooting/reverse-dns-sshd.rst
--- a/docs/troubleshooting/reverse-dns-sshd.rst
+++ b/docs/troubleshooting/reverse-dns-sshd.rst
@@ -1,24 +1,24 @@
ScyllaDB Manager: connection to sshd server is slow or timing out
===================================================================

-This troubleshooting guide describes what to do if you experience slow Scylla
-Manager behavior or when connections to Scylla nodes over SSH are timing out.
+This troubleshooting guide describes what to do if you experience slow ScyllaDB
+Manager behavior or when connections to ScyllaDB nodes over SSH are timing out.

Phenomenon
^^^^^^^^^^

-This might affect users of the Scylla Manager when determining REST API status
-of the managed clusters. Scylla Manager Client might report certain nodes as
+This might affect users of the ScyllaDB Manager when determining REST API status
+of the managed clusters. ScyllaDB Manager Client might report certain nodes as
being down even if they are accessible.

Background
^^^^^^^^^^

-Scylla Manager manages the Scylla nodes over the HTTP API. Communication
-between Scylla Manager server and Scylla nodes is encrypted by tunneling HTTP
+ScyllaDB Manager manages the ScyllaDB nodes over the HTTP API. Communication
+between ScyllaDB Manager server and ScyllaDB nodes is encrypted by tunneling HTTP
traffic over an SSH connection.

-Establishing an SSH tunnel requires that Scylla nodes have a running sshd
+Establishing an SSH tunnel requires that ScyllaDB nodes have a running sshd
server. Optionally, the sshd server can be configured to do reverse DNS
to resolve client IPs.
When resolving takes a long time or it stalls for some reason then connections
@@ -29,7 +29,7 @@ Solution

There are two options for solving this.

-One option is to improve your DNS setup on the Scylla node by changing to a
+One option is to improve your DNS setup on the ScyllaDB node by changing to a
better DNS resolver. It is recommended to use static nameserver IPs from
`Cloudflare <https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/>`_
or `Google <https://developers.google.com/speed/public-dns/>`_.
diff --git a/docs/troubleshooting/scylla-wont-start.rst b/docs/troubleshooting/scylla-wont-start.rst
--- a/docs/troubleshooting/scylla-wont-start.rst
+++ b/docs/troubleshooting/scylla-wont-start.rst
@@ -9,7 +9,7 @@ The scylla process stopped hours ago and it won’t start
How to Verify
^^^^^^^^^^^^^

-Possible cause: The Scylla process is managed by systemd, and systemd expects it to be able to fully start within a timeout. If this timeout is reached, systemd will kill the Scylla process and try to start it again. If that is the case, you will see the following message in the Scylla logs:
+Possible cause: The ScyllaDB process is managed by systemd, and systemd expects it to be able to fully start within a timeout. If this timeout is reached, systemd will kill the ScyllaDB process and try to start it again. If that is the case, you will see the following message in the ScyllaDB logs:

.. code-block:: shell

diff --git a/docs/troubleshooting/space-up.rst b/docs/troubleshooting/space-up.rst
--- a/docs/troubleshooting/space-up.rst
+++ b/docs/troubleshooting/space-up.rst
@@ -1,15 +1,15 @@
Space Utilization Keeps Going Up During Normal Operation
========================================================

-This troubleshooting guide describes what to do when Scylla space usage keeps going up.
+This troubleshooting guide describes what to do when ScyllaDB space usage keeps going up.

Problem
^^^^^^^

Over the lifetime of the cluster, old data is compacted together into new SSTables, removing the old.
Spikes in storage utilization are expected during compactions but if it doesn't reduce after a compaction
finishes it can be indicative of a problem.
-You can use the ``lsof`` Linux utility to check if there are files that Scylla has deleted but whose
+You can use the ``lsof`` Linux utility to check if there are files that ScyllaDB has deleted but whose
deletion is not yet reflected in the filesystem.

For example:
@@ -26,14 +26,14 @@ Solution

1. If you are running repairs or large reads, those could keep references to old files. Monitor those operations to see if space utilization goes down once they finish.

-2. If the utilization problem persists and you are not running repairs or performing large reads, it could be an indication of a Scylla bug.
- Contact the Scylla team an provide the following data:
+2. If the utilization problem persists and you are not running repairs or performing large reads, it could be an indication of a ScyllaDB bug.
+   Contact the ScyllaDB team and provide the following data:

* ``journalctl -u scylla-server > scylla_logs.txt``

* ``ls -lhRS /var/lib/scylla/data/ > file_list.txt``

-3. In the mean time, restarting the Scylla nodes will release the references and free up the space.
+3. In the meantime, restarting the ScyllaDB nodes will release the references and free up the space.

.. include:: /rst_include/scylla-commands-restart-index.rst

diff --git a/docs/troubleshooting/sstable-corruption.rst b/docs/troubleshooting/sstable-corruption.rst
--- a/docs/troubleshooting/sstable-corruption.rst
+++ b/docs/troubleshooting/sstable-corruption.rst
@@ -1,13 +1,13 @@
ScyllaDB Fails to Start - SSTable Corruption Problem
=====================================================

-This troubleshooting guide describes what to do when Scylla fails to start due to a corrupted SSTables.
+This troubleshooting guide describes what to do when ScyllaDB fails to start due to corrupted SSTables.
Corruption can be a result of a bug, a disk issue, or human error, for example, deleting one of the SSTable files.


Problem
^^^^^^^
-Scylla node fails to start, node status shows that the node is down (DN)
+A ScyllaDB node fails to start, and the node status shows that the node is down (DN).

How to Verify
^^^^^^^^^^^^^
@@ -19,7 +19,7 @@ For example:

scylla[28659]: [shard 0] database - Exception while populating keyspace '<mykeyspace>' with 'test' table from file '/var/lib/scylla/data/mykeyspace/test-fa9994e02fd811e7a4ee000000000000': sstables::malformed_sstable_exception (At directory:/var/lib/scylla/data/mykeyspace/test-fa9994e02fd811e7a4ee000000000000: no TOC found for SSTable with generation 2!. Refusing to boot)

-In this scenario, a missing ``TOC`` file will prevent the Scylla node from starting.
+In this scenario, a missing ``TOC`` file will prevent the ScyllaDB node from starting.

The SSTable corruption problem can take different forms, for example, other missing or unreadable files. The following solution applies to all of these scenarios.

@@ -42,11 +42,11 @@ For example:
-rw-r--r-- 1 scylla scylla 10 May 8 14:17 test-ka-2-Digest.sha1
-rw-r--r-- 1 scylla scylla 24 May 8 14:17 test-ka-2-Filter.db
-rw-r--r-- 1 scylla scylla 140 May 8 14:17 test-ka-2-Index.db
-rw-r--r-- 1 scylla scylla 38 May 8 14:17 test-ka-2-Scylla.db
-rw-r--r-- 1 scylla scylla 4446 May 8 14:17 test-ka-2-Statistics.db
-rw-r--r-- 1 scylla scylla 92 May 8 14:17 test-ka-2-Summary.db

-3. Start Scylla node
+3. Start ScyllaDB node

``sudo systemctl start scylla-server``

diff --git a/docs/troubleshooting/startup/index.rst b/docs/troubleshooting/startup/index.rst
--- a/docs/troubleshooting/startup/index.rst
+++ b/docs/troubleshooting/startup/index.rst
@@ -6,22 +6,22 @@ ScyllaDB Startup
:maxdepth: 2

Ownership Problems </troubleshooting/change-ownership/>
- Scylla will not Start </troubleshooting/scylla-wont-start/>
- Scylla Python Script broken </troubleshooting/python-error-no-module-named-yaml>
+ ScyllaDB will not Start </troubleshooting/scylla-wont-start/>
+ ScyllaDB Python Script broken </troubleshooting/python-error-no-module-named-yaml>

.. raw:: html

<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla Startup Issues</h5>
+ <h5 id="getting-started">ScyllaDB Startup Issues</h5>
</div>
<div class="medium-9 columns">


-* :doc:`Scylla Fails to Start Due to Wrong Ownership Problems </troubleshooting/change-ownership/>`
-* :doc:`Scylla will not Start </troubleshooting/scylla-wont-start/>`
-* :doc:`A change in EPEL broke Scylla Python Script </troubleshooting/python-error-no-module-named-yaml>`
+* :doc:`ScyllaDB Fails to Start Due to Wrong Ownership Problems </troubleshooting/change-ownership/>`
+* :doc:`ScyllaDB will not Start </troubleshooting/scylla-wont-start/>`
+* :doc:`A change in EPEL broke ScyllaDB Python Script </troubleshooting/python-error-no-module-named-yaml>`



diff --git a/docs/troubleshooting/support/index.rst b/docs/troubleshooting/support/index.rst
--- a/docs/troubleshooting/support/index.rst
+++ b/docs/troubleshooting/support/index.rst
@@ -5,7 +5,7 @@ Errors and Support
:hidden:
:maxdepth: 2

- Report a Scylla problem </troubleshooting/report-scylla-problem>
+ Report a ScyllaDB problem </troubleshooting/report-scylla-problem>
Error Messages </troubleshooting/error-messages/index>
Change Log Level </troubleshooting/log-level>

@@ -14,11 +14,11 @@ Errors and Support
<div class="panel callout radius animated">
<div class="row">
<div class="medium-3 columns">
- <h5 id="getting-started">Scylla Errors and Support</h5>
+ <h5 id="getting-started">ScyllaDB Errors and Support</h5>
</div>
<div class="medium-9 columns">

-* :doc:`How to Report a Scylla Problem </troubleshooting/report-scylla-problem/>`
+* :doc:`How to Report a ScyllaDB Problem </troubleshooting/report-scylla-problem/>`

* :doc:`Error Messages </troubleshooting/error-messages/index/>`

@@ -28,7 +28,7 @@ Errors and Support

* :ref:`What to Do if You're Having a Performance Issue <report-performance-problem>`

-Also check out the `Onboarding lesson <https://university.scylladb.com/courses/scylla-operations/lessons/onboarding/topic/onboarding/>`_ on Scylla University
+Also check out the `Onboarding lesson <https://university.scylladb.com/courses/scylla-operations/lessons/onboarding/topic/onboarding/>`_ on ScyllaDB University

.. raw:: html

diff --git a/docs/upgrade/_common/warning.rst b/docs/upgrade/_common/warning.rst
--- a/docs/upgrade/_common/warning.rst
+++ b/docs/upgrade/_common/warning.rst
@@ -2,7 +2,7 @@

.. warning::

- If you are using CDC and upgrading Scylla 4.3 to 4.4, please review the API updates in :doc:`querying CDC streams </using-scylla/cdc/cdc-querying-streams>` and :doc:`CDC stream generations </using-scylla/cdc/cdc-stream-generations>`.
+ If you are using CDC and upgrading ScyllaDB 4.3 to 4.4, please review the API updates in :doc:`querying CDC streams </using-scylla/cdc/cdc-querying-streams>` and :doc:`CDC stream generations </using-scylla/cdc/cdc-stream-generations>`.
In particular, you should update applications that use CDC according to :ref:`CDC Upgrade notes <scylla-4-3-to-4-4-upgrade>` **before** upgrading the cluster to 4.4.

If you are using CDC and upgrading from pre 4.3 version to 4.3, note the :doc:`upgrading from experimental CDC </kb/cdc-experimental-upgrade>`.
diff --git a/docs/upgrade/ami-upgrade.rst b/docs/upgrade/ami-upgrade.rst
--- a/docs/upgrade/ami-upgrade.rst
+++ b/docs/upgrade/ami-upgrade.rst
@@ -12,4 +12,4 @@ If you’re using your own image and have installed ScyllaDB packages for Ubuntu
follow the extended upgrade procedure on the **EC2/GCP/Azure Ubuntu image** tab
in the :doc:`upgrade guide </upgrade/index/>` for your ScyllaDB version.

-To check your Scylla version, run the ``scylla --version`` command.
+To check your ScyllaDB version, run the ``scylla --version`` command.
diff --git a/docs/upgrade/index.rst b/docs/upgrade/index.rst
--- a/docs/upgrade/index.rst
+++ b/docs/upgrade/index.rst
@@ -45,7 +45,7 @@ Procedures for Upgrading ScyllaDB

* :doc:`Upgrade ScyllaDB Open Source <upgrade-opensource/index>`

-* :doc:`Upgrade from ScyllaDB Open Source to Scylla Enterprise <upgrade-to-enterprise/index>`
+* :doc:`Upgrade from ScyllaDB Open Source to ScyllaDB Enterprise <upgrade-to-enterprise/index>`

* :doc:`Upgrade ScyllaDB Image <ami-upgrade>`

diff --git a/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/metric-update-5.4-to-6.0.rst b/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/metric-update-5.4-to-6.0.rst
--- a/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/metric-update-5.4-to-6.0.rst
+++ b/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/metric-update-5.4-to-6.0.rst
@@ -1,8 +1,8 @@
.. |SRC_VERSION| replace:: 5.4
.. |NEW_VERSION| replace:: 6.0

-ScyllaDB Metric Update - Scylla |SRC_VERSION| to |NEW_VERSION|
-====================================================================
+ScyllaDB Metric Update - ScyllaDB |SRC_VERSION| to |NEW_VERSION|
+================================================================

.. toctree::
:maxdepth: 2
diff --git a/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/upgrade-guide-from-5.4-to-6.0-generic.rst b/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/upgrade-guide-from-5.4-to-6.0-generic.rst
--- a/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/upgrade-guide-from-5.4-to-6.0-generic.rst
+++ b/docs/upgrade/upgrade-opensource/upgrade-guide-from-5.4-to-6.0/upgrade-guide-from-5.4-to-6.0-generic.rst
@@ -28,7 +28,7 @@
.. |ROLLBACK| replace:: rollback
.. _ROLLBACK: ./#rollback-procedure

-.. |SCYLLA_METRICS| replace:: Scylla Metrics Update - Scylla 5.4 to 6.0
+.. |SCYLLA_METRICS| replace:: ScyllaDB Metrics Update - ScyllaDB 5.4 to 6.0
.. _SCYLLA_METRICS: ../metric-update-5.4-to-6.0

=============================================================================
@@ -315,7 +315,7 @@ ScyllaDB rollback is a rolling procedure that does **not** require full cluster
For each of the nodes you rollback to |SRC_VERSION|, serially (i.e., one node
at a time), you will:

-* Drain the node and stop Scylla
+* Drain the node and stop ScyllaDB
* Retrieve the old ScyllaDB packages
* Restore the configuration file
* Reload systemd configuration
diff --git a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/index.rst b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/index.rst
--- a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/index.rst
+++ b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/index.rst
@@ -17,4 +17,4 @@ Upgrade - ScyllaDB 5.2 to ScyllaDB Enterprise 2023.1


* :doc:`Upgrade ScyllaDB from 5.2.x to 2023.1.y <upgrade-guide-from-5.2-to-2023.1-generic>`
- * :doc:`ScyllaDB Metrics Update - Scylla 5.2 to 2023.1 <metric-update-5.2-to-2023.1>`
\ No newline at end of file
+ * :doc:`ScyllaDB Metrics Update - ScyllaDB 5.2 to 2023.1 <metric-update-5.2-to-2023.1>`
\ No newline at end of file
diff --git a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/metric-update-5.2-to-2023.1.rst b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/metric-update-5.2-to-2023.1.rst
--- a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/metric-update-5.2-to-2023.1.rst
+++ b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.2-to-2023.1/metric-update-5.2-to-2023.1.rst
@@ -1,5 +1,5 @@
-=================================================================
-ScyllaDB Metric Update - Scylla 5.2 to Scylla Enterprise 2023.1
-=================================================================
+===================================================================
+ScyllaDB Metric Update - ScyllaDB 5.2 to ScyllaDB Enterprise 2023.1
+===================================================================

There are no metric updates in ScyllaDB Enterprise 2023.1 compared to ScyllaDB 5.2.
diff --git a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/index.rst b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/index.rst
--- a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/index.rst
+++ b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/index.rst
@@ -11,5 +11,5 @@ Upgrade - ScyllaDB 5.4 to ScyllaDB Enterprise 2024.1
Metrics <metric-update-5.4-to-2024.1>

* :doc:`Upgrade ScyllaDB from 5.4.x to 2024.1.y <upgrade-guide-from-5.4-to-2024.1-generic>`
-* :doc:`ScyllaDB Metrics Update - Scylla 5.4 to 2024.1 <metric-update-5.4-to-2024.1>`
+* :doc:`ScyllaDB Metrics Update - ScyllaDB 5.4 to 2024.1 <metric-update-5.4-to-2024.1>`

diff --git a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/metric-update-5.4-to-2024.1.rst b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/metric-update-5.4-to-2024.1.rst
--- a/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/metric-update-5.4-to-2024.1.rst
+++ b/docs/upgrade/upgrade-to-enterprise/upgrade-guide-from-5.4-to-2024.1/metric-update-5.4-to-2024.1.rst
@@ -1,5 +1,5 @@
-=================================================================
-ScyllaDB Metric Update - Scylla 5.4 to Scylla Enterprise 2024.1
-=================================================================
+===================================================================
+ScyllaDB Metric Update - ScyllaDB 5.4 to ScyllaDB Enterprise 2024.1
+===================================================================

There are no metric updates in ScyllaDB Enterprise 2024.1 compared to ScyllaDB 5.4.
\ No newline at end of file
diff --git a/docs/using-scylla/_common/alternator-description.rst b/docs/using-scylla/_common/alternator-description.rst
--- a/docs/using-scylla/_common/alternator-description.rst
+++ b/docs/using-scylla/_common/alternator-description.rst
@@ -1,4 +1,4 @@
-Scylla Alternator: The Open Source DynamoDB-compatible API
+ScyllaDB Alternator: The Open Source DynamoDB-compatible API

Project Alternator is an open-source project for an Amazon DynamoDB™-compatible API written in C++. The goal of this project is to deliver an open source alternative to Amazon’s DynamoDB, deployable wherever a user would want: on-premises, on other public clouds like Microsoft Azure or Google Cloud Platform, or still on AWS (for users who wish to take advantage of other aspects of Amazon’s market-leading cloud ecosystem, such as the high-density i3en instances).

diff --git a/docs/using-scylla/alternator/index.rst b/docs/using-scylla/alternator/index.rst
--- a/docs/using-scylla/alternator/index.rst
+++ b/docs/using-scylla/alternator/index.rst
@@ -1,11 +1,11 @@
-=================
-Scylla Alternator
-=================
+===================
+ScyllaDB Alternator
+===================

.. include:: /using-scylla/_common/alternator-description.rst

-* :doc:`Scylla Alternator Documentation </alternator/alternator>`
-* `Scylla Alternator project <https://github.com/scylladb/scylla/tree/master/alternator>`_ - Part of the Scylla project
-* `Scylla Alternator course <https://university.scylladb.com/courses/scylla-alternator/>`_ on Scylla University
+* :doc:`ScyllaDB Alternator Documentation </alternator/alternator>`
+* `ScyllaDB Alternator project <https://github.com/scylladb/scylla/tree/master/alternator>`_ - Part of the ScyllaDB project
+* `ScyllaDB Alternator course <https://university.scylladb.com/courses/scylla-alternator/>`_ on ScyllaDB University


diff --git a/docs/using-scylla/cassandra-compatibility.rst b/docs/using-scylla/cassandra-compatibility.rst
--- a/docs/using-scylla/cassandra-compatibility.rst
+++ b/docs/using-scylla/cassandra-compatibility.rst
@@ -100,7 +100,7 @@ Consistency Level (read and write)
| LOCAL_SERIAL | |v|:sup:`*` |
+-------------------------------------+--------------+

-:sup:`*` See :doc:`Scylla LWT </using-scylla/lwt>`.
+:sup:`*` See :doc:`ScyllaDB LWT </using-scylla/lwt>`.


Snitches
diff --git a/docs/using-scylla/cdc/_common/cdc-inserts.rst b/docs/using-scylla/cdc/_common/cdc-inserts.rst
--- a/docs/using-scylla/cdc/_common/cdc-inserts.rst
+++ b/docs/using-scylla/cdc/_common/cdc-inserts.rst
@@ -4,7 +4,7 @@ Inserts
Digression: the difference between inserts and updates
++++++++++++++++++++++++++++++++++++++++++++++++++++++

-Inserts are not the same as updates, contrary to a popular belief in Cassandra/Scylla communities. The following example illustrates the difference:
+Inserts are not the same as updates, contrary to a popular belief in Cassandra/ScyllaDB communities. The following example illustrates the difference:

.. code-block:: cql

diff --git a/docs/using-scylla/cdc/_common/cdc-updates.rst b/docs/using-scylla/cdc/_common/cdc-updates.rst
--- a/docs/using-scylla/cdc/_common/cdc-updates.rst
+++ b/docs/using-scylla/cdc/_common/cdc-updates.rst
@@ -81,10 +81,10 @@ Note that column deletions, (which are equivalent to updates that set a column t

You can read about row deletions in the :ref:`corresponding section <row-deletions>`.

-Digression: static rows in Scylla
-+++++++++++++++++++++++++++++++++
+Digression: static rows in ScyllaDB
++++++++++++++++++++++++++++++++++++

-If a table in Scylla has static columns, then every partition in this table contains a *static row*, which is global for the partition. This static row is different from the clustered rows: it contains values for partition key columns and static columns, while clustered rows contain values for partition key, clustering key, and regular columns. The following example illustrates how the static row can be used:
+If a table in ScyllaDB has static columns, then every partition in this table contains a *static row*, which is global for the partition. This static row is different from the clustered rows: it contains values for partition key columns and static columns, while clustered rows contain values for partition key, clustering key, and regular columns. The following example illustrates how the static row can be used:

.. code-block:: cql

diff --git a/docs/using-scylla/cdc/cdc-advanced-types.rst b/docs/using-scylla/cdc/cdc-advanced-types.rst
--- a/docs/using-scylla/cdc/cdc-advanced-types.rst
+++ b/docs/using-scylla/cdc/cdc-advanced-types.rst
@@ -107,7 +107,7 @@ result:
--------------------------------------+----+----+------+---------------+------------------------
5bb26094-2f40-11eb-63e9-a0d4519f9c1b | 0 | 0 | null | null | {1, 2, 3}

-Note that the elements don't need to exist to be removed. Removing an element is expressed by adding a special ``tombstone`` value (as usual in Scylla) under the given key. Thus, we can understand the ``cdc$deleted_elements_X`` column as showing the set of keys which were assigned tombstones in the corresponding statement. Recall that a tombstone removes a value if its timestamp is greater than or equal to the value's timestamp.
+Note that the elements don't need to exist to be removed. Removing an element is expressed by adding a special ``tombstone`` value (as usual in ScyllaDB) under the given key. Thus, we can understand the ``cdc$deleted_elements_X`` column as showing the set of keys which were assigned tombstones in the corresponding statement. Recall that a tombstone removes a value if its timestamp is greater than or equal to the value's timestamp.

Deleting values for specific keys in CQL as above can only be done using an ``UPDATE`` statement with the ``- {...}`` notation.
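
As a minimal sketch of that notation (assuming a map column ``v`` with ``int`` keys in a table ``ks.t``; the same syntax works for sets), the deletion could look like this:

.. code-block:: cql

   -- removes the entries under keys 1, 2 and 3 from the collection column v;
   -- each removed key gets a tombstone, visible in the CDC log's deleted elements column
   UPDATE ks.t SET v = v - {1, 2, 3} WHERE pk = 0 AND ck = 0;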

@@ -226,7 +226,7 @@ and gives:

We've explained that the ``SET v = {...}`` notation creates a `collection-wide tombstone`, and tombstones delete all values that have timestamps lower than or equal to the tombstone's timestamp. How is it then possible to both delete a collection and add elements to it in the same statement?

-Each CQL statement that arrives to Scylla comes with a timestamp (or multiple timestamps, in case of specially constructed batches, but that's rare); generally, it is the timestamp that's assigned to the written data. However, collection-wide tombstones written by ``UPDATE ... SET X = {...}`` statements or ``INSERT`` statements are an exception.
+Each CQL statement that arrives at ScyllaDB comes with a timestamp (or multiple timestamps, in the case of specially constructed batches, but that's rare); generally, it is the timestamp that's assigned to the written data. However, collection-wide tombstones written by ``UPDATE ... SET X = {...}`` statements or ``INSERT`` statements are an exception.

The rule is as follows:

@@ -238,7 +238,7 @@ This is what makes the above behavior possible. Suppose that the statement

UPDATE ks.t SET v = {1: 'v1', 2: 'v2'} WHERE pk = 0 AND ck = 0;

-has timestamp ``T``. It is translated by Scylla into 3 pieces of information: an element ``(1, 'v1')`` with timestamp ``T``, an element ``(2, 'v2')`` with timestamp ``T``, and a collection-wide tombstone with timestamp ``T-1``. The tombstone will therefore remove all elements that have timestamps lower than or equal to ``T - 1``, but will not remove the elements ``(1, 'v1'), (2, 'v2')``, since their timestamps are greater.
+has timestamp ``T``. It is translated by ScyllaDB into 3 pieces of information: an element ``(1, 'v1')`` with timestamp ``T``, an element ``(2, 'v2')`` with timestamp ``T``, and a collection-wide tombstone with timestamp ``T-1``. The tombstone will therefore remove all elements that have timestamps lower than or equal to ``T - 1``, but will not remove the elements ``(1, 'v1'), (2, 'v2')``, since their timestamps are greater.
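
A short worked sketch of this rule follows (it assumes, as in the examples above, a table ``ks.t`` with a ``map<int, text>`` column ``v``; the explicit timestamps are chosen purely for illustration):

.. code-block:: cql

   -- write a single element with an explicit timestamp
   UPDATE ks.t USING TIMESTAMP 100 SET v[0] = 'old' WHERE pk = 0 AND ck = 0;

   -- overwrite the whole collection with the same timestamp; per the rule above,
   -- the collection-wide tombstone gets timestamp 99, so the element (0, 'old')
   -- written at timestamp 100 is not removed
   UPDATE ks.t USING TIMESTAMP 100 SET v = {1: 'v1', 2: 'v2'} WHERE pk = 0 AND ck = 0;

   -- v now contains {0: 'old', 1: 'v1', 2: 'v2'}
   SELECT v FROM ks.t WHERE pk = 0 AND ck = 0;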

**Warning**: this rule **does not** apply when deleting collections using a column ``DELETE``. In that case, the original timestamp is used. The following example illustrates that:

@@ -424,7 +424,7 @@ The rules for timestamps described in the :ref:`cdc_collection_tombstones` secti
Lists
-----

-Non-frozen lists are possibly the weirdest types you can find in Scylla (and Cassandra). Perhaps it's surprising when we say that non-frozen lists are also special cases of non-frozen maps; when querying tables that use lists, however, the `key` is hidden and only the values are shown. The type of the key used in the internal map representation of a list is ``timeuuid``.
+Non-frozen lists are possibly the weirdest types you can find in ScyllaDB (and Cassandra). Perhaps it's surprising when we say that non-frozen lists are also special cases of non-frozen maps; when querying tables that use lists, however, the `key` is hidden and only the values are shown. The type of the key used in the internal map representation of a list is ``timeuuid``.

Although you can't see list keys when using CQL read queries, you can `update` the value under any given key. For example:

@@ -441,9 +441,9 @@ Thus, the syntax is:

where ``X`` is the list column name, ``k`` is a timeuuid, and ``v`` is a value of the list's value type.

-The keys define the order of elements in the list. When using the standard list update syntax (e.g. ``SET v = v + [1, 2]``), the timeuuids are automatically generated by Scylla using the current time. This method allows fast, conflict-free concurrent updates to the list (such as appending or prepending elements). This list representation is a simple example of a `CRDT <https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type>`_.
+The keys define the order of elements in the list. When using the standard list update syntax (e.g. ``SET v = v + [1, 2]``), the timeuuids are automatically generated by ScyllaDB using the current time. This method allows fast, conflict-free concurrent updates to the list (such as appending or prepending elements). This list representation is a simple example of a `CRDT <https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type>`_.
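
A small illustrative sketch of that syntax (assuming a ``list<int>`` column ``v`` in a table ``ks.t``):

.. code-block:: cql

   -- append: the generated timeuuid keys sort after the existing ones
   UPDATE ks.t SET v = v + [1, 2] WHERE pk = 0 AND ck = 0;

   -- prepend: the generated keys sort before the existing ones
   UPDATE ks.t SET v = [0] + v WHERE pk = 0 AND ck = 0;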

-In the CDC log table, the key is revealed to the user. The design goals of CDC enforce this: the user should have the possibility of replaying `the exact same sequence of changes` to another Scylla cluster in order to obtain the same result. In the case of lists, one can't allow the other cluster to generate the timeuuids since the resulting order of elements in the list may end up different; the user must specify the keys on their own, thus they must be able to learn what the keys are in the first place.
+In the CDC log table, the key is revealed to the user. The design goals of CDC enforce this: the user should have the possibility of replaying `the exact same sequence of changes` to another ScyllaDB cluster in order to obtain the same result. In the case of lists, one can't allow the other cluster to generate the timeuuids since the resulting order of elements in the list may end up different; the user must specify the keys on their own, thus they must be able to learn what the keys are in the first place.

Let's start with an example:

@@ -510,7 +510,7 @@ as we can see, the added element's key is equal to the timeuuid we've used in th
Deleting elements
+++++++++++++++++

-The `deleted elements` column describes elements that are removed from the list. Scylla offers the ``v = v - [...]`` syntax for removing all elements whose values appear in the provided list of values. To do this, Scylla first performs a read, obtaining the set of all keys that contain the values from the provided list, and then writes a tombstone for each obtained key. For example:
+The `deleted elements` column describes elements that are removed from the list. ScyllaDB offers the ``v = v - [...]`` syntax for removing all elements whose values appear in the provided list of values. To do this, ScyllaDB first performs a read, obtaining the set of all keys that contain the values from the provided list, and then writes a tombstone for each obtained key. For example:

.. code-block:: cql

@@ -580,7 +580,7 @@ The rules for timestamps described in the :ref:`cdc_collection_tombstones` secti
User Defined Types
------------------

-Unsurprisingly, non-frozen UDTs are also special cases of non-frozen maps. This time the key type is ``smallint`` and the keys stand for field indices. When using UDTs one refers to the field names and Scylla translates them to keys (field indices) using schema internal definitions.
+Unsurprisingly, non-frozen UDTs are also special cases of non-frozen maps. This time the key type is ``smallint`` and the keys stand for field indices. When using UDTs, one refers to the field names, and ScyllaDB translates them to keys (field indices) using the schema's internal definitions.

Understanding the correspondence between field names and field indices is important when using CDC with non-frozen UDT columns. Everything then works the same as for maps.

@@ -691,7 +691,7 @@ result:

The index ``2`` corresponds to field ``c``, so the deleted elements set contains ``2``, as expected.

-Unfortunately, Scylla offers no syntax to operate directly on field indices; you can only perform writes using field names. Thus, to replay a CDC log entry which contains user type field deletions, you must manually translate the field indices to field names using the rule explained above.
+Unfortunately, ScyllaDB offers no syntax to operate directly on field indices; you can only perform writes using field names. Thus, to replay a CDC log entry which contains user type field deletions, you must manually translate the field indices to field names using the rule explained above.
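
For example, translating a deletion of field index ``2`` back into CQL could look like the following sketch (it assumes the UDT behind column ``v`` declares its fields in the order ``a``, ``b``, ``c``, so that index ``2`` maps to field ``c``):

.. code-block:: cql

   -- setting a single UDT field to null writes a tombstone for that field,
   -- which corresponds to a deleted element with index 2 in the CDC log
   UPDATE ks.t SET v.c = null WHERE pk = 0 AND ck = 0;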

Deleting and overwriting UDTs
+++++++++++++++++++++++++++++
diff --git a/docs/using-scylla/cdc/cdc-intro.rst b/docs/using-scylla/cdc/cdc-intro.rst
--- a/docs/using-scylla/cdc/cdc-intro.rst
+++ b/docs/using-scylla/cdc/cdc-intro.rst
@@ -50,7 +50,7 @@ Some examples where CDC may be beneficial:
* Implementing a notification system.
* In-flight analytics: looking for patterns in the changes in order to derive useful information, e.g. for fraud detection.

-In Scylla CDC is optional and enabled on a per-table basis. The history of changes made to a CDC-enabled table is stored in a separate associated table.
+In ScyllaDB, CDC is optional and enabled on a per-table basis. The history of changes made to a CDC-enabled table is stored in a separate associated table.
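
For example, CDC can be switched on for an existing table with a statement along these lines (a minimal sketch; the keyspace and table names are placeholders):

.. code-block:: cql

   ALTER TABLE ks.t WITH cdc = {'enabled': true};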

Terminology
-----------
@@ -75,7 +75,7 @@ You can enable CDC when creating or altering a table using the ``cdc`` option, f
Using CDC with Applications
---------------------------

-When writing applications, you can now use our language specific libraries to simplify writing applications which will read from Scylla CDC.
+When writing applications, you can now use our language-specific libraries to simplify building applications that read from ScyllaDB CDC.
The following libraries are available:

* `Go <https://github.com/scylladb/scylla-cdc-go>`_
@@ -85,7 +85,7 @@ The following libraries are available:
More information
----------------

-`Scylla University: Change Data Capture (CDC) lesson <https://university.scylladb.com/courses/data-modeling/lessons/change-data-capture-cdc/>`_ - Learn how to use CDC. Some of the topics covered are:
+`ScyllaDB University: Change Data Capture (CDC) lesson <https://university.scylladb.com/courses/data-modeling/lessons/change-data-capture-cdc/>`_ - Learn how to use CDC. Some of the topics covered are:

* An overview of Change Data Capture, what exactly is it, what are some common use cases, what does it do, and an overview of how it works
* How can that data be consumed? Different options for consuming the data changes including normal CQL, a layered approach, and integrators
diff --git a/docs/using-scylla/cdc/cdc-log-table.rst b/docs/using-scylla/cdc/cdc-log-table.rst
--- a/docs/using-scylla/cdc/cdc-log-table.rst
+++ b/docs/using-scylla/cdc/cdc-log-table.rst
@@ -17,7 +17,7 @@ Suppose you've created the following table:
PRIMARY KEY ((pk1, pk2), ck1, ck2)
) WITH cdc = {'enabled': true};

-Since CDC was enabled using ``WITH cdc = {'enabled':true}``, Scylla automatically creates the following log table:
+Since CDC was enabled using ``WITH cdc = {'enabled':true}``, ScyllaDB automatically creates the following log table:

.. code-block:: cql

@@ -81,7 +81,7 @@ The ``cdc$stream_id`` column, of type ``blob``, is the log table's partition key
When a change is performed in the base table, a stream identifier is chosen for the corresponding log entries depending on two things:

* the base write's partition key,
-* the currently operating **CDC generation** which is a global property of the Scylla cluster (similar to tokens).
+* the currently operating **CDC generation** which is a global property of the ScyllaDB cluster (similar to tokens).

Partitions in the log table are called *streams*; within one stream, all entries are sorted according to the base table writes' timestamps, using standard clustering key properties (note that ``cdc$time``, which represents the time of the write, is the first part of the clustering key).

@@ -92,10 +92,10 @@ Time column

The ``cdc$time`` column is the first part of the clustering key. The type of this column is ``timeuuid``, which represents a so-called *time-based UUID*, also called a *version 1 UUID*. A value of this type consists of two parts: a *timestamp*, and "the rest". In the case of a CDC log entry, the timestamp is equal to the timestamp of the corresponding write (more on that below), and the rest of the ``timeuuid`` value consists of randomly generated bytes so that writes with conflicting timestamps get separate entries in the log table.

-Digression: write timestamps in Scylla
-++++++++++++++++++++++++++++++++++++++
+Digression: write timestamps in ScyllaDB
+++++++++++++++++++++++++++++++++++++++++

-Each write in Scylla has a timestamp, or possibly multiple different timestamps (which is rare), used to order the write with respect to other writes, which might be performed concurrently. The timestamp can be:
+Each write in ScyllaDB has a timestamp, or possibly multiple different timestamps (which is rare), used to order the write with respect to other writes, which might be performed concurrently. The timestamp can be:

* specified by the user,
* generated by the used CQL driver,
@@ -253,7 +253,7 @@ returns:

(1 rows)

-``timeuuid`` values are compared in Scylla using the timestamp first, and the other bytes second. Thus, given two base writes whose corresponding log entries are in the same stream, the write with the higher timestamp will have its log entries appear after the lower timestamp write's log entries. If they have the same timestamp, the ordering will be chosen randomly (because the other bytes in the ``timeuuid`` are generated randomly).
+``timeuuid`` values are compared in ScyllaDB using the timestamp first, and the other bytes second. Thus, given two base writes whose corresponding log entries are in the same stream, the write with the higher timestamp will have its log entries appear after the lower timestamp write's log entries. If they have the same timestamp, the ordering will be chosen randomly (because the other bytes in the ``timeuuid`` are generated randomly).

Batch sequence number column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -282,7 +282,7 @@ returns:

(2 rows)

-Observe that two entries have appeared, corresponding to the two updates. They have the same ``cdc$time`` value since they were performed in a single write and had the same timestamp. To distinguish between them, we use the ``cdc$batch_seq_no`` column. It is unspecified which update has its entries come first (in the example above, it is unspecified whether the ``ck = 0`` write or the ``ck = 1`` write will have ``cdc$batch_seq_no = 0``); from Scylla's point of view, it doesn't matter.
+Observe that two entries have appeared, corresponding to the two updates. They have the same ``cdc$time`` value since they were performed in a single write and had the same timestamp. To distinguish between them, we use the ``cdc$batch_seq_no`` column. It is unspecified which update has its entries come first (in the example above, it is unspecified whether the ``ck = 0`` write or the ``ck = 1`` write will have ``cdc$batch_seq_no = 0``); from ScyllaDB's point of view, it doesn't matter.

If you use different timestamps for the batch, the entries will have different timeuuids, so they won't be grouped like above:

diff --git a/docs/using-scylla/cdc/cdc-preimages.rst b/docs/using-scylla/cdc/cdc-preimages.rst
--- a/docs/using-scylla/cdc/cdc-preimages.rst
+++ b/docs/using-scylla/cdc/cdc-preimages.rst
@@ -18,7 +18,7 @@ The purpose of delta rows is to describe the write itself --- the `mutation` per

**Postimage rows** exist to show what the state of the row affected by the write is `after` the write. Postimages always describe the state of the entire row. They are constructed by combining the delta row with the full preimage row (including columns not affected by the write).

-.. caution:: in order to generate preimage rows for a given write, Scylla must perform a read before making the write. This increases latencies significantly. Furthermore, the read-then-write procedure is not atomic; between the read and the write a concurrent write may be performed. If the concurrent write modifies the same row and column that the preimage had read, the preimage's value will not be consistent with the order of writes as they appear in the CDC log. Preimages will only give "sensible" results if no concurrent writes are performed to the same row. They also heavily depend on monotonicity of clocks used to generate write timestamps. We will see some examples of what can go wrong in a later section. These remarks also apply to postimages, since they are computed from preimages.
+.. caution:: in order to generate preimage rows for a given write, ScyllaDB must perform a read before making the write. This increases latencies significantly. Furthermore, the read-then-write procedure is not atomic; between the read and the write a concurrent write may be performed. If the concurrent write modifies the same row and column that the preimage had read, the preimage's value will not be consistent with the order of writes as they appear in the CDC log. Preimages will only give "sensible" results if no concurrent writes are performed to the same row. They also heavily depend on monotonicity of clocks used to generate write timestamps. We will see some examples of what can go wrong in a later section. These remarks also apply to postimages, since they are computed from preimages.

Preimage rows
-------------
diff --git a/docs/using-scylla/cdc/cdc-querying-streams.rst b/docs/using-scylla/cdc/cdc-querying-streams.rst
--- a/docs/using-scylla/cdc/cdc-querying-streams.rst
+++ b/docs/using-scylla/cdc/cdc-querying-streams.rst
@@ -16,12 +16,12 @@ The recommended alternative is to query each stream separately:

SELECT * FROM ks.t_scylla_cdc_log WHERE "cdc$stream_id" = 0x365fd1a9ae34373954529ac8169dfb93;

-With the above approach you can, for instance, build a distributed CDC consumer, where each of the consumer nodes queries only streams that are replicated to Scylla nodes in proximity to the consumer node. This allows efficient, concurrent querying of streams, without putting strain on a single node due to a partition scan.
+With the above approach you can, for instance, build a distributed CDC consumer, where each of the consumer nodes queries only streams that are replicated to ScyllaDB nodes in proximity to the consumer node. This allows efficient, concurrent querying of streams, without putting strain on a single node due to a partition scan.

.. caution::
- The tables mentioned in the following sections: ``system_distributed.cdc_generation_timestamps`` and ``system_distributed.cdc_streams_descriptions_v2`` have been introduced in Scylla 4.4. It is highly recommended to upgrade to 4.4 for efficient CDC usage. The last section explains how to run the below examples in Scylla 4.3.
+ The tables mentioned in the following sections: ``system_distributed.cdc_generation_timestamps`` and ``system_distributed.cdc_streams_descriptions_v2`` have been introduced in ScyllaDB 4.4. It is highly recommended to upgrade to 4.4 for efficient CDC usage. The last section explains how to run the below examples in ScyllaDB 4.3.

- If you use CDC in Scylla 4.3 and your application is constantly querying CDC log tables and using the old description table to learn about new generations and stream IDs, you should upgrade your application before upgrading to 4.4. The upgraded application should dynamically switch from using the old description table to the new description tables when the cluster is upgraded from 4.3 to 4.4. We present an example algorithm that the application can perform in the last section.
+ If you use CDC in ScyllaDB 4.3 and your application is constantly querying CDC log tables and using the old description table to learn about new generations and stream IDs, you should upgrade your application before upgrading to 4.4. The upgraded application should dynamically switch from using the old description table to the new description tables when the cluster is upgraded from 4.3 to 4.4. We present an example algorithm that the application can perform in the last section.

We highly recommend using the newest releases of our client CDC libraries (`Java CDC library <https://github.com/scylladb/scylla-cdc-java>`_, `Go CDC library <https://github.com/scylladb/scylla-cdc-go>`_, `Rust CDC library <https://github.com/scylladb/scylla-cdc-rust>`_). They take care of correctly querying the stream description tables and they handle the upgrade procedure for you.

@@ -169,10 +169,10 @@ You should keep querying streams from generation ``2020-03-25 16:05:29.484000+00

and so on. After you make sure that every node uses the new generation, you can query streams from the previous generation one last time, and then switch to querying streams from the new generation.

-Differences in Scylla 4.3
--------------------------
+Differences in ScyllaDB 4.3
+---------------------------

-In Scylla 4.3 the tables ``cdc_generation_timestamps`` and ``cdc_streams_descriptions_v2`` don't exist. Instead there is the ``cdc_streams_descriptions`` table. To retrieve all generation timestamps, instead of querying the ``time`` column of ``cdc_generation_timestamps`` using a single-partition query (i.e. using ``WHERE key = 'timestamps'``), you would query the ``time`` column of ``cdc_streams_descriptions`` with a full range scan (without specifying a single partition):
+In ScyllaDB 4.3 the tables ``cdc_generation_timestamps`` and ``cdc_streams_descriptions_v2`` don't exist. Instead there is the ``cdc_streams_descriptions`` table. To retrieve all generation timestamps, instead of querying the ``time`` column of ``cdc_generation_timestamps`` using a single-partition query (i.e. using ``WHERE key = 'timestamps'``), you would query the ``time`` column of ``cdc_streams_descriptions`` with a full range scan (without specifying a single partition):

.. code-block:: cql

@@ -188,20 +188,20 @@ All stream IDs are stored in a single row, unlike ``cdc_streams_descriptions_v2`

.. _scylla-4-3-to-4-4-upgrade:

-Scylla 4.3 to Scylla 4.4 upgrade
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ScyllaDB 4.3 to ScyllaDB 4.4 upgrade
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-If you didn't enable CDC on any table while using Scylla 4.3 or earlier, you don't need to understand this section. Simply upgrade to 4.4 (we recommend doing it as soon as you can) and implement your application to query streams as described above.
+If you didn't enable CDC on any table while using ScyllaDB 4.3 or earlier, you don't need to understand this section. Simply upgrade to 4.4 (we recommend doing it as soon as you can) and implement your application to query streams as described above.

-However, if you use CDC with Scylla 4.3 and your application is periodically querying the old ``cdc_streams_descriptions`` table, you should upgrade your application *before* upgrading the cluster to Scylla 4.4.
+However, if you use CDC with ScyllaDB 4.3 and your application is periodically querying the old ``cdc_streams_descriptions`` table, you should upgrade your application *before* upgrading the cluster to ScyllaDB 4.4.

The upgraded application should understand both the old ``cdc_streams_descriptions`` table and the new ``cdc_generation_timestamps`` and ``cdc_streams_descriptions_v2`` tables. It should smoothly transition from querying the old table to querying the new tables as the cluster upgrades.

-When Scylla upgrades from 4.3 to 4.4 it will attempt to copy descriptions of all existing generations from the old table to the new tables. This copying procedure may take a while. Until it finishes, your application should keep using the old table; it should switch as soon as it detects that the procedure is finished. To detect that the procedure is finished, you can query the ``system.cdc_local`` table: if the table contains a row with ``key = 'rewritten'``, the procedure was finished; otherwise it is still in progress.
+When ScyllaDB upgrades from 4.3 to 4.4, it will attempt to copy descriptions of all existing generations from the old table to the new tables. This copying procedure may take a while. Until it finishes, your application should keep using the old table; it should switch as soon as it detects that the procedure has finished. To detect that the procedure has finished, you can query the ``system.cdc_local`` table: if the table contains a row with ``key = 'rewritten'``, the procedure has finished; otherwise it is still in progress.
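
For example, an application could check for that marker row with a query along these lines (a sketch; it assumes ``key`` is the partition key of ``system.cdc_local``):

.. code-block:: cql

   -- a non-empty result means the rewriting procedure has finished
   SELECT key FROM system.cdc_local WHERE key = 'rewritten';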

-It is possible to disable the rewriting procedure. In that case only the latest generation will be inserted to the new table and your application should act accordingly (it shouldn't wait for the ``'rewritten'`` row to appear but start using the new tables immediately). It is not recommended to disable the rewriting procedure and we've purposefully left it undocumented how to do it. This option exists only for emergencies and should be used only with the assistance of a qualified Scylla engineer.
+It is possible to disable the rewriting procedure. In that case only the latest generation will be inserted to the new table and your application should act accordingly (it shouldn't wait for the ``'rewritten'`` row to appear but start using the new tables immediately). It is not recommended to disable the rewriting procedure and we've purposefully left it undocumented how to do it. This option exists only for emergencies and should be used only with the assistance of a qualified ScyllaDB engineer.

-In fresh Scylla 4.4 clusters (that were not upgraded from a previous version) the old description table does not exist. Thus the application should check for its existence and when it detects its absence, it should use the new tables immediately.
+In fresh ScyllaDB 4.4 clusters (that were not upgraded from a previous version), the old description table does not exist. Thus, the application should check for its existence and, if the table is absent, use the new tables immediately.

With the above considerations in mind, the application should behave as follows. When it wants to learn if there are new generations:

diff --git a/docs/using-scylla/cdc/cdc-stream-generations.rst b/docs/using-scylla/cdc/cdc-stream-generations.rst
--- a/docs/using-scylla/cdc/cdc-stream-generations.rst
+++ b/docs/using-scylla/cdc/cdc-stream-generations.rst
@@ -25,7 +25,7 @@ A CDC generation consists of:
This is the mapping used to decide on which stream IDs to use when making writes, as explained in the :doc:`./cdc-streams` document. It is a global property of the cluster: it doesn't depend on the table you're making writes to.

.. caution::
- The tables mentioned in the following sections: ``system_distributed.cdc_generation_timestamps`` and ``system_distributed.cdc_streams_descriptions_v2`` have been introduced in Scylla 4.4. It is highly recommended to upgrade to 4.4 for efficient CDC usage. The last section explains how to run the below examples in Scylla 4.3.
+ The tables mentioned in the following sections: ``system_distributed.cdc_generation_timestamps`` and ``system_distributed.cdc_streams_descriptions_v2`` have been introduced in ScyllaDB 4.4. It is highly recommended to upgrade to 4.4 for efficient CDC usage. The last section explains how to run the below examples in ScyllaDB 4.3.

When CDC generations change
---------------------------
@@ -140,7 +140,7 @@ Suppose a node was started at 17:59:35 UTC+1 time, as reported by the node's log

.. code-block:: none

- INFO 2020-02-06 17:59:35,087 [shard 0] init - Scylla version 666.development-0.20200206.9eae0b57a with build-id 052adc1eb0601af2 starting ...
+ INFO 2020-02-06 17:59:35,087 [shard 0] init - ScyllaDB version 666.development-0.20200206.9eae0b57a with build-id 052adc1eb0601af2 starting ...

You immediately connected to the node using cqlsh and queried the ``cdc_generation_timestamps`` table:

@@ -158,7 +158,7 @@ The result was:

(1 rows)

-This generation's timestamp is ``17:00:43 UTC time`` (timestamp columns in Scylla always show the timestamp as a UTC time-date), which is a little more than a minute later compared to the node's startup time (which was ``16:59:35 UTC time``).
+This generation's timestamp is ``17:00:43 UTC time`` (timestamp columns in ScyllaDB always show the timestamp as a UTC time-date), which is a little more than a minute later compared to the node's startup time (which was ``16:59:35 UTC time``).

If you then immediately create a CDC-enabled table and attempt to make an insert:

@@ -176,18 +176,18 @@ the result will be an error message:

If you see a message like that, it doesn't necessarily mean something is wrong, as it may simply mean that the first generation hasn't started operating yet. If you wait for about a minute, you should be able to write to a CDC-enabled table.

-You may also see this message if you were running a cluster with an old version of Scylla (which didn't support CDC) and started a rolling upgrade.
+You may also see this message if you were running a cluster with an old version of ScyllaDB (which didn't support CDC) and started a rolling upgrade.
Make sure to upgrade all nodes **before** you start doing CDC writes: one of the nodes will be responsible for creating the first CDC generation and informing other nodes about it.

-Differences in Scylla 4.3
--------------------------
+Differences in ScyllaDB 4.3
+---------------------------

-In Scylla 4.3 the tables ``cdc_generation_timestamps`` and ``cdc_streams_descriptions_v2`` don't exist. Instead there is the ``cdc_streams_descriptions`` table. To retrieve all generation timestamps, instead of querying the ``time`` column of ``cdc_generation_timestamps`` using a single-partition query (i.e. using ``WHERE key = 'timestamps'``), you would query the ``time`` column of ``cdc_streams_descriptions`` with a full range scan (without specifying a single partition):
+In ScyllaDB 4.3 the tables ``cdc_generation_timestamps`` and ``cdc_streams_descriptions_v2`` don't exist. Instead there is the ``cdc_streams_descriptions`` table. To retrieve all generation timestamps, instead of querying the ``time`` column of ``cdc_generation_timestamps`` using a single-partition query (i.e. using ``WHERE key = 'timestamps'``), you would query the ``time`` column of ``cdc_streams_descriptions`` with a full range scan (without specifying a single partition):

.. code-block:: cql

SELECT time FROM system_distributed.cdc_streams_descriptions;

-Unfortunately, the ``time`` column is the partition key column of this table. Therefore the values are not sorted, unlike the values of the ``time`` column of the ``cdc_generation_timestamps`` table (in which ``time`` is the clustering key). You will have to sort them yourselves in order to learn the timestamp of the last generation. Furthermore, querying the table with a full range scan like above requires the coordinator to contact the entire cluster, potentially increasing resource usage and latency. Thus we recommend upgrading to Scylla 4.4 and use the new description tables instead.
+Unfortunately, the ``time`` column is the partition key column of this table. Therefore the values are not sorted, unlike the values of the ``time`` column of the ``cdc_generation_timestamps`` table (in which ``time`` is the clustering key). You will have to sort them yourself in order to learn the timestamp of the last generation. Furthermore, querying the table with a full range scan like above requires the coordinator to contact the entire cluster, potentially increasing resource usage and latency. Thus we recommend upgrading to ScyllaDB 4.4 and using the new description tables instead.
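+
+For comparison, the equivalent query on ScyllaDB 4.4 is a single-partition read (a sketch based on the description above):
+
+.. code-block:: cql
+
+   SELECT time FROM system_distributed.cdc_generation_timestamps WHERE key = 'timestamps';
+
+Because ``time`` is the clustering key there, the returned timestamps come back sorted and the query only involves the replicas of a single partition.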

.. TODO: CDC generation expiration
diff --git a/docs/using-scylla/cdc/cdc-streams.rst b/docs/using-scylla/cdc/cdc-streams.rst
--- a/docs/using-scylla/cdc/cdc-streams.rst
+++ b/docs/using-scylla/cdc/cdc-streams.rst
@@ -3,7 +3,7 @@ CDC Streams
===========

Streams are partitions in CDC log tables. They are identified by their keys: *stream identifiers*.
-When you perform a base table write, Scylla chooses a stream ID for the corresponding CDC log entries based on two things:
+When you perform a base table write, ScyllaDB chooses a stream ID for the corresponding CDC log entries based on two things:

* the currently operating *CDC generation* (:doc:`./cdc-stream-generations`),
* the base write's partition key.
@@ -38,7 +38,7 @@ returns:

Observe that in the example above, all base writes made to partition ``0`` were sent to the same stream. The same is true for all base writes made to partition ``1``.

-Underneath, Scylla uses the token of the base write's partition key to decide the stream ID.
+Underneath, ScyllaDB uses the token of the base write's partition key to decide the stream ID.
It stores a mapping from the token ring (the set of all tokens, which are 64-bit integers) to the set of stream IDs associated with the currently operating CDC generation.
Thus, choosing a stream proceeds in two steps:

@@ -71,7 +71,7 @@ returns:

.. note:: For a given stream there is no straightforward way to find a partition key which will get mapped to this stream, because of the partitioner, which uses the murmur3 hash function underneath (the truth is you can efficiently find such a key, as murmur3 is not a cryptographic hash, but it's not completely obvious).

-The set of used stream IDs is independent from the table. It's a global property of the Scylla cluster:
+The set of used stream IDs is independent from the table. It's a global property of the ScyllaDB cluster:

.. code-block:: cql

diff --git a/docs/using-scylla/drivers/cql-drivers/index.rst b/docs/using-scylla/drivers/cql-drivers/index.rst
--- a/docs/using-scylla/drivers/cql-drivers/index.rst
+++ b/docs/using-scylla/drivers/cql-drivers/index.rst
@@ -20,7 +20,7 @@ We recommend using ScyllaDB drivers. All ScyllaDB drivers are shard-aware and pr
benefits over third-party drivers.

ScyllaDB supports the CQL binary protocol version 3, so any Apache Cassandra/CQL driver that implements
-the same version works with Scylla.
+the same version works with ScyllaDB.

The following table lists the available ScyllaDB drivers, specifying which support
`ScyllaDB Cloud Serverless <https://cloud.docs.scylladb.com/stable/serverless/index.html>`_
diff --git a/docs/using-scylla/drivers/cql-drivers/scylla-cpp-driver.rst b/docs/using-scylla/drivers/cql-drivers/scylla-cpp-driver.rst
--- a/docs/using-scylla/drivers/cql-drivers/scylla-cpp-driver.rst
+++ b/docs/using-scylla/drivers/cql-drivers/scylla-cpp-driver.rst
@@ -1,16 +1,16 @@
-====================
-Scylla C++ Driver
-====================
+===================
+ScyllaDB C++ Driver
+===================

-The Scylla C++ driver is a modern, feature-rich and **shard-aware** C/C++ client library for ScyllaDB using exclusively Cassandra’s binary protocol and Cassandra Query Language v3.
+The ScyllaDB C++ driver is a modern, feature-rich and **shard-aware** C/C++ client library for ScyllaDB using exclusively Cassandra’s binary protocol and Cassandra Query Language v3.
This driver is forked from Datastax cpp-driver.

-Read the `documentation <https://cpp-driver.docs.scylladb.com>`_ to get started or visit the Github project `Scylla C++ driver <https://github.com/scylladb/cpp-driver>`_.
+Read the `documentation <https://cpp-driver.docs.scylladb.com>`_ to get started or visit the Github project `ScyllaDB C++ driver <https://github.com/scylladb/cpp-driver>`_.


More Information
----------------

* `C++ Driver Documentation <https://cpp-driver.docs.scylladb.com>`_
-* `C/C++ Driver course at Scylla University <https://university.scylladb.com/courses/using-scylla-drivers/lessons/cpp-driver-part-1/>`_
-* `Blog: A Shard-Aware Scylla C/C++ Driver <https://www.scylladb.com/2021/03/18/a-shard-aware-scylla-c-c-driver/>`_
+* `C/C++ Driver course at ScyllaDB University <https://university.scylladb.com/courses/using-scylla-drivers/lessons/cpp-driver-part-1/>`_
+* `Blog: A Shard-Aware ScyllaDB C/C++ Driver <https://www.scylladb.com/2021/03/18/a-shard-aware-scylla-c-c-driver/>`_
diff --git a/docs/using-scylla/drivers/cql-drivers/scylla-go-driver.rst b/docs/using-scylla/drivers/cql-drivers/scylla-go-driver.rst
--- a/docs/using-scylla/drivers/cql-drivers/scylla-go-driver.rst
+++ b/docs/using-scylla/drivers/cql-drivers/scylla-go-driver.rst
@@ -1,14 +1,14 @@
-================
-Scylla Go Driver
-================
+==================
+ScyllaDB Go Driver
+==================

-The `Scylla Go driver <https://github.com/scylladb/gocql>`_ is shard aware and contains extensions for a tokenAwareHostPolicy supported by Scylla 2.3 and onwards.
-It is is a fork of the `GoCQL Driver <https://github.com/gocql/gocql>`_ but has been enhanced with capabilities that take advantage of Scylla's unique architecture.
+The `ScyllaDB Go driver <https://github.com/scylladb/gocql>`_ is shard aware and contains extensions for a tokenAwareHostPolicy supported by ScyllaDB 2.3 and onwards.
+It is a fork of the `GoCQL Driver <https://github.com/gocql/gocql>`_ but has been enhanced with capabilities that take advantage of ScyllaDB's unique architecture.
Using this policy, the driver can select a connection to a particular shard based on the shard’s token.
As a result, latency is significantly reduced because there is no need to pass data between the shards.

The protocol extension spec is `available here <https://github.com/scylladb/scylla/blob/master/docs/dev/protocol-extensions.md>`_.
-The Scylla Go Driver is a drop-in replacement for gocql.
+The ScyllaDB Go Driver is a drop-in replacement for gocql.
As such, no code changes are needed to use this driver.
All you need to do is rebuild using the ``replace`` directive in your ``mod`` file.

@@ -18,11 +18,11 @@ All you need to do is rebuild using the ``replace`` directive in your ``mod`` fi
Using CDC with Go
-----------------

-When writing applications, you can now use our `Go Library <https://github.com/scylladb/scylla-cdc-go>`_ to simplify writing applications that read from Scylla CDC.
+You can use our `Go Library <https://github.com/scylladb/scylla-cdc-go>`_ to simplify writing applications that read from ScyllaDB CDC.

More information
----------------

-* `Scylla Gocql Driver project page on GitHub <https://github.com/scylladb/gocql>`_ - contains the source code as well as a readme and documentation files.
+* `ScyllaDB Gocql Driver project page on GitHub <https://github.com/scylladb/gocql>`_ - contains the source code as well as a readme and documentation files.
* `ScyllaDB University: Golang and ScyllaDB <https://university.scylladb.com/courses/using-scylla-drivers/lessons/golang-and-scylla-part-1/>`_
A three-part lesson with in-depth examples from executing a few basic CQL statements with a ScyllaDB cluster using the Gocql driver, to the different data types that you can use in your database tables and how to store these binary files in ScyllaDB with a simple Go application.
diff --git a/docs/using-scylla/drivers/cql-drivers/scylla-gocqlx-driver.rst b/docs/using-scylla/drivers/cql-drivers/scylla-gocqlx-driver.rst
--- a/docs/using-scylla/drivers/cql-drivers/scylla-gocqlx-driver.rst
+++ b/docs/using-scylla/drivers/cql-drivers/scylla-gocqlx-driver.rst
@@ -1,8 +1,8 @@
-==========================
-Scylla Gocql Extension
-==========================
+=========================
+ScyllaDB Gocql Extension
+=========================

-The Scylla Gocqlx is an extension to gocql that provides usability features.
+ScyllaDB Gocqlx is an extension to gocql that provides usability features.
With gocqlx, you can bind the query parameters from maps and structs, use named query parameters (``:identifier``), and scan the query results into structs and slices.
The driver includes a fluent and flexible CQL query builder and a database migrations module.

@@ -11,6 +11,6 @@ The driver includes a fluent and flexible CQL query builder and a database migra
More information
----------------

-* `Scylla Gocqlx Driver project page on GitHub <https://github.com/scylladb/gocqlx>`_ - contains the source code as well as a readme and documentation files.
-* `ScyllaDB University: Golang and Scylla Part 3 – GoCQLX <https://university.scylladb.com/courses/using-scylla-drivers/lessons/golang-and-scylla-part-3-gocqlx/>`_ - part three of the Golang three-part course which focuses on how to create a sample Go application that executes a few basic CQL statements with a Scylla cluster using the GoCQLX package
+* `ScyllaDB Gocqlx Driver project page on GitHub <https://github.com/scylladb/gocqlx>`_ - contains the source code as well as a readme and documentation files.
+* `ScyllaDB University: Golang and ScyllaDB Part 3 – GoCQLX <https://university.scylladb.com/courses/using-scylla-drivers/lessons/golang-and-scylla-part-3-gocqlx/>`_ - part three of the Golang three-part course which focuses on how to create a sample Go application that executes a few basic CQL statements with a ScyllaDB cluster using the GoCQLX package

diff --git a/docs/using-scylla/drivers/cql-drivers/scylla-java-driver.rst b/docs/using-scylla/drivers/cql-drivers/scylla-java-driver.rst
--- a/docs/using-scylla/drivers/cql-drivers/scylla-java-driver.rst
+++ b/docs/using-scylla/drivers/cql-drivers/scylla-java-driver.rst
@@ -1,31 +1,31 @@
-==================
-Scylla Java Driver
-==================
+=====================
+ScyllaDB Java Driver
+=====================

-Scylla Java Driver is forked from `DataStax Java Driver <https://github.com/datastax/java-driver>`_ with enhanced capabilities, taking advantage of Scylla's unique architecture.
+ScyllaDB Java Driver is forked from `DataStax Java Driver <https://github.com/datastax/java-driver>`_ with enhanced capabilities, taking advantage of ScyllaDB's unique architecture.

-The Scylla Java driver is shard aware and contains extensions for a ``tokenAwareHostPolicy``.
+The ScyllaDB Java driver is shard aware and contains extensions for a ``tokenAwareHostPolicy``.
Using this policy, the driver can select a connection to a particular shard based on the shard’s token.
As a result, latency is significantly reduced because there is no need to pass data between the shards.

-Use the Scylla Java driver for better compatibility and support for Scylla with Java-based applications.
+Use the ScyllaDB Java driver for better compatibility and support for ScyllaDB with Java-based applications.

Read the `documentation <https://java-driver.docs.scylladb.com/>`_ to get started or visit the `Github project <https://github.com/scylladb/java-driver>`_.

The driver architecture is based on layers. At the bottom lies the driver core.
-This core handles everything related to the connections to a Scylla cluster (for example, connection pool, discovering new nodes, etc.) and exposes a simple, relatively low-level API on top of which higher-level layers can be built.
+This core handles everything related to the connections to a ScyllaDB cluster (for example, connection pool, discovering new nodes, etc.) and exposes a simple, relatively low-level API on top of which higher-level layers can be built.

-The Scylla Java Driver is a drop-in replacement for the DataStax Java Driver.
+The ScyllaDB Java Driver is a drop-in replacement for the DataStax Java Driver.
As such, no code changes are needed to use this driver.

Using CDC with Java
-------------------

-When writing applications, you can now use our `Java Library <https://github.com/scylladb/scylla-cdc-java>`_ to simplify writing applications that read from Scylla CDC.
+You can use our `Java Library <https://github.com/scylladb/scylla-cdc-java>`_ to simplify writing applications that read from ScyllaDB CDC.

More information
----------------
-* `Scylla Java Driver Docs <https://java-driver.docs.scylladb.com/>`_
-* `Scylla Java Driver project page on GitHub <https://github.com/scylladb/java-driver/>`_ - Source Code
+* `ScyllaDB Java Driver Docs <https://java-driver.docs.scylladb.com/>`_
+* `ScyllaDB Java Driver project page on GitHub <https://github.com/scylladb/java-driver/>`_ - Source Code
* `ScyllaDB University: Coding with Java <https://university.scylladb.com/courses/using-scylla-drivers/lessons/coding-with-java-part-1/>`_ - a three-part lesson with in-depth examples from executing a few basic CQL statements with a ScyllaDB cluster using the Java driver, to the different data types that you can use in your database tables and how to store these binary files in ScyllaDB with a simple Java application.

diff --git a/docs/using-scylla/drivers/cql-drivers/scylla-python-driver.rst b/docs/using-scylla/drivers/cql-drivers/scylla-python-driver.rst
--- a/docs/using-scylla/drivers/cql-drivers/scylla-python-driver.rst
+++ b/docs/using-scylla/drivers/cql-drivers/scylla-python-driver.rst
@@ -1,20 +1,20 @@
-====================
-Scylla Python Driver
-====================
+======================
+ScyllaDB Python Driver
+======================

-The Scylla Python driver is shard aware and contains extensions for a ``tokenAwareHostPolicy``.
+The ScyllaDB Python driver is shard aware and contains extensions for a ``tokenAwareHostPolicy``.
Using this policy, the driver can select a connection to a particular shard based on the shard’s token.
As a result, latency is significantly reduced because there is no need to pass data between the shards.

-Read the `documentation <https://python-driver.docs.scylladb.com/>`_ to get started or visit the Github project `Scylla Python driver <https://github.com/scylladb/python-driver/>`_.
+Read the `documentation <https://python-driver.docs.scylladb.com/>`_ to get started or visit the Github project `ScyllaDB Python driver <https://github.com/scylladb/python-driver/>`_.

-As the Scylla Python Driver is a drop-in replacement for DataStax Python Driver, no code changes are needed to use the driver.
-Use the Scylla Python driver for better compatibility and support for Scylla with Python-based applications.
+As the ScyllaDB Python Driver is a drop-in replacement for DataStax Python Driver, no code changes are needed to use the driver.
+Use the ScyllaDB Python driver for better compatibility and support for ScyllaDB with Python-based applications.


More information
----------------

-* `Scylla Python Driver Documentation <https://python-driver.docs.scylladb.com/>`_
-* `Scylla Python Driver on GitHub <https://github.com/scylladb/python-driver/>`_
+* `ScyllaDB Python Driver Documentation <https://python-driver.docs.scylladb.com/>`_
+* `ScyllaDB Python Driver on GitHub <https://github.com/scylladb/python-driver/>`_
* `ScyllaDB University: Coding with Python <https://university.scylladb.com/courses/using-scylla-drivers/lessons/coding-with-python/>`_
diff --git a/docs/using-scylla/drivers/dynamo-drivers/index.rst b/docs/using-scylla/drivers/dynamo-drivers/index.rst
--- a/docs/using-scylla/drivers/dynamo-drivers/index.rst
+++ b/docs/using-scylla/drivers/dynamo-drivers/index.rst
@@ -5,5 +5,5 @@ AWS DynamoDB Drivers



-Scylla AWS DynamoDB Compatible API can be used with any AWS DynamoDB Driver.
+ScyllaDB AWS DynamoDB Compatible API can be used with any AWS DynamoDB Driver.
For a list of AWS DynamoDB drivers, see `here <https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.html>`_.
diff --git a/docs/using-scylla/drivers/index.rst b/docs/using-scylla/drivers/index.rst
--- a/docs/using-scylla/drivers/index.rst
+++ b/docs/using-scylla/drivers/index.rst
@@ -1,21 +1,21 @@
-==============
-Scylla Drivers
-==============
+================
+ScyllaDB Drivers
+================

.. toctree::
:titlesonly:
:hidden:

- Scylla CQL Drivers <cql-drivers/index>
- Scylla DynamoDB Drivers <dynamo-drivers/index>
+ ScyllaDB CQL Drivers <cql-drivers/index>
+ ScyllaDB DynamoDB Drivers <dynamo-drivers/index>



-You can use Scylla with:
+You can use ScyllaDB with:

* :doc:`Apache Cassandra CQL Compatible Drivers <cql-drivers/index>`
* :doc:`Amazon DynamoDB Compatible API Drivers <dynamo-drivers/index>`

Additional drivers coming soon!

-If you are looking for a Scylla Integration Solution or a Connector, refer to :doc:`Scylla Integrations </using-scylla/integrations/index>`.
+If you are looking for a ScyllaDB Integration Solution or a Connector, refer to :doc:`ScyllaDB Integrations </using-scylla/integrations/index>`.
diff --git a/docs/using-scylla/index.rst b/docs/using-scylla/index.rst
--- a/docs/using-scylla/index.rst
+++ b/docs/using-scylla/index.rst
@@ -37,7 +37,7 @@ ScyllaDB for Developers
:id: "getting-started"
:class: my-panel

- * :doc:`ScyllaDB Tools </operating-scylla/admin-tools/index>` - Tools for testing and integrating with Scylla
+ * :doc:`ScyllaDB Tools </operating-scylla/admin-tools/index>` - Tools for testing and integrating with ScyllaDB
* :doc:`cqlsh </cql/cqlsh>` - A command line shell for interacting with ScyllaDB through CQL


diff --git a/docs/using-scylla/integrations/index.rst b/docs/using-scylla/integrations/index.rst
--- a/docs/using-scylla/integrations/index.rst
+++ b/docs/using-scylla/integrations/index.rst
@@ -1,6 +1,6 @@
-==================================
-Scylla Integrations and Connectors
-==================================
+====================================
+ScyllaDB Integrations and Connectors
+====================================


.. toctree::
@@ -23,36 +23,36 @@ Scylla Integrations and Connectors
integration-mindsdb

.. panel-box::
- :title: Scylla Integrations
+ :title: ScyllaDB Integrations
:id: "getting-started"
:class: my-panel


- Scylla is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with Scylla (more :doc:`here </using-scylla/drivers/index>`).
- Any application which uses a CQL driver will work with Scylla.
+ ScyllaDB is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with ScyllaDB (more :doc:`here </using-scylla/drivers/index>`).
+ Any application which uses a CQL driver will work with ScyllaDB.

- The list below contains links to integration projects using Scylla with third-party projects.
- If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+ The list below contains links to integration projects using ScyllaDB with third-party projects.
+ If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.


- * :doc:`Integrate Scylla with Spark <integration-spark>`
- * :doc:`Integrate Scylla with KairosDB <integration-kairos>`
- * :doc:`Integrate Scylla with Presto <integration-presto>`
- * :doc:`Integrate Scylla with Elasticsearch <integration-elasticsearch>`
- * :doc:`Integrate Scylla with Kubernetes <integration-k8>`
- * :doc:`Integrate Scylla with JanusGraph <integration-janus>`
- * :doc:`Integrate Scylla with DataDog <integration-datadog>`
- * :doc:`Integrate Scylla with Apache Kafka <integration-kafka>`
- * :doc:`Integrate Scylla with IOTA <integration-iota>`
- * :doc:`Integrate Scylla with Spring <integration-spring>`
- * :doc:`Install Scylla with Ansible <integration-ansible>`
- * :doc:`Integrate Scylla with Databricks <integration-databricks>`
- * :doc:`Integrate Scylla with Jaeger Server <integration-jaeger>`
- * :doc:`Integrate Scylla with MindsDB <integration-mindsdb>`
+ * :doc:`Integrate ScyllaDB with Spark <integration-spark>`
+ * :doc:`Integrate ScyllaDB with KairosDB <integration-kairos>`
+ * :doc:`Integrate ScyllaDB with Presto <integration-presto>`
+ * :doc:`Integrate ScyllaDB with Elasticsearch <integration-elasticsearch>`
+ * :doc:`Integrate ScyllaDB with Kubernetes <integration-k8>`
+ * :doc:`Integrate ScyllaDB with JanusGraph <integration-janus>`
+ * :doc:`Integrate ScyllaDB with DataDog <integration-datadog>`
+ * :doc:`Integrate ScyllaDB with Apache Kafka <integration-kafka>`
+ * :doc:`Integrate ScyllaDB with IOTA <integration-iota>`
+ * :doc:`Integrate ScyllaDB with Spring <integration-spring>`
+ * :doc:`Install ScyllaDB with Ansible <integration-ansible>`
+ * :doc:`Integrate ScyllaDB with Databricks <integration-databricks>`
+ * :doc:`Integrate ScyllaDB with Jaeger Server <integration-jaeger>`
+ * :doc:`Integrate ScyllaDB with MindsDB <integration-mindsdb>`

.. panel-box::
- :title: Scylla Connectors
+ :title: ScyllaDB Connectors
:id: "getting-started"
:class: my-panel

- * :doc:`Scylla Kafka Sink Connector </using-scylla/integrations/sink-kafka-connector/>`
+ * :doc:`ScyllaDB Kafka Sink Connector </using-scylla/integrations/sink-kafka-connector/>`
diff --git a/docs/using-scylla/integrations/integration-ansible.rst b/docs/using-scylla/integrations/integration-ansible.rst
--- a/docs/using-scylla/integrations/integration-ansible.rst
+++ b/docs/using-scylla/integrations/integration-ansible.rst
@@ -1,17 +1,17 @@
=============================
-Install Scylla with Ansible
+Install ScyllaDB with Ansible
=============================

-You can use the Ansible roles and the playbook examples that follow to deploy and maintain Scylla clusters.
-There are roles for creating a Scylla cluster, a Scylla Manager, Scylla Monitoring Stack, and a Loader.
+You can use the Ansible roles and the playbook examples that follow to deploy and maintain ScyllaDB clusters.
+There are roles for creating a ScyllaDB cluster, ScyllaDB Manager, the ScyllaDB Monitoring Stack, and a Loader.
These roles can be used independently or together, using each role's outputs.
-You can use these roles with Scylla (Open Source and Enterprise), Scylla Manager, and Scylla Monitoring Stack.
+You can use these roles with ScyllaDB (Open Source and Enterprise), ScyllaDB Manager, and ScyllaDB Monitoring Stack.

To get started, visit the `GitHub project <https://github.com/scylladb/scylla-ansible-roles/>`_.


Additional Topics
-----------------
-* `Deploying a Scylla Cluster from Scratch <https://github.com/scylladb/scylla-ansible-roles/wiki/ansible-scylla-node:-Deploying-a-Scylla-cluster>`_ - This guide will follow the steps required to deploy a Scylla cluster using the ansible-scylla-node role
-* `Scylla Manager and Ansible Integration <https://github.com/scylladb/scylla-ansible-roles/wiki/ansible-scylla-manager:-Deploying-Scylla-Manager-and-connecting-it-to-a-cluster>`_
-* `Scylla Monitoring Stack Integration <https://github.com/scylladb/scylla-ansible-roles/wiki/Deploying-Scylla-Monitoring-and-connecting-it-to-a-Scylla-Cluster>`_
+* `Deploying a ScyllaDB Cluster from Scratch <https://github.com/scylladb/scylla-ansible-roles/wiki/ansible-scylla-node:-Deploying-a-Scylla-cluster>`_ - This guide follows the steps required to deploy a ScyllaDB cluster using the ansible-scylla-node role.
+* `ScyllaDB Manager and Ansible Integration <https://github.com/scylladb/scylla-ansible-roles/wiki/ansible-scylla-manager:-Deploying-Scylla-Manager-and-connecting-it-to-a-cluster>`_
+* `ScyllaDB Monitoring Stack Integration <https://github.com/scylladb/scylla-ansible-roles/wiki/Deploying-Scylla-Monitoring-and-connecting-it-to-a-Scylla-Cluster>`_
diff --git a/docs/using-scylla/integrations/integration-databricks.rst b/docs/using-scylla/integrations/integration-databricks.rst
--- a/docs/using-scylla/integrations/integration-databricks.rst
+++ b/docs/using-scylla/integrations/integration-databricks.rst
@@ -1,22 +1,22 @@
-================================
-Integrate Scylla with Databricks
-================================
+==================================
+Integrate ScyllaDB with Databricks
+==================================

-Scylla is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with Scylla (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with Scylla, for example, Databricks Spark cluster.
+ScyllaDB is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with ScyllaDB (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with ScyllaDB, for example, Databricks Spark cluster.

Resource list
-------------
Although your requirements may be different, this example uses the following resources:

-* Scylla cluster
+* ScyllaDB cluster
* Databricks account

Integration instructions
------------------------

**Before you begin**

-Verify that you have installed Scylla and know the Scylla server IP address.
+Verify that you have installed ScyllaDB and know the ScyllaDB server IP address.
Make sure you have a connection on port 9042:

.. code-block:: none
@@ -52,7 +52,7 @@ Spark config:

**Test case**

-1. Prepare test data [Scylla]:
+1. Prepare test data [ScyllaDB]:

.. code-block:: none

diff --git a/docs/using-scylla/integrations/integration-datadog.rst b/docs/using-scylla/integrations/integration-datadog.rst
--- a/docs/using-scylla/integrations/integration-datadog.rst
+++ b/docs/using-scylla/integrations/integration-datadog.rst
@@ -1,23 +1,23 @@
-==============================
-Integrate Scylla with DataDog
-==============================
+===============================
+Integrate ScyllaDB with DataDog
+===============================

-Datadog is a popular SaaS monitoring service. The default `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ for Scylla is based on Prometheus and Grafana. You can export metrics from this stack and into DataDog, using it to monitor Scylla.
+Datadog is a popular SaaS monitoring service. The default `ScyllaDB Monitoring Stack <https://monitoring.docs.scylladb.com/stable/>`_ is based on Prometheus and Grafana. You can export metrics from this stack into DataDog, using it to monitor ScyllaDB.

The way to do so is to run a DataDog Agent that pulls metrics from Prometheus and pushes them to DataDog, as follows:

.. image:: images/datadog-arch.png
:align: left
:alt: scylla and datadog solution

-If you are a Scylla Cloud user, you can export your cluster metrics to your own Prometheus and use the same method to export the metrics from Prometheus to DataDog, effectively monitoring your Scylla Cloud cluster with DataDog.
+If you are a ScyllaDB Cloud user, you can export your cluster metrics to your own Prometheus and use the same method to export the metrics from Prometheus to DataDog, effectively monitoring your ScyllaDB Cloud cluster with DataDog.


-The list below contains integration projects using DataDog to monitor Scylla. If you have monitored Scylla with DataDog and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using DataDog to monitor ScyllaDB. If you have monitored ScyllaDB with DataDog and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.

Additional Topics
-----------------

-* `Monitoring Scylla with Datadog: A Tale about Datadog – Prometheus integration <https://www.scylladb.com/2019/10/02/monitoring-scylla-with-datadog-a-tale-about-datadog-prometheus-integration/>`_
-* `Scylla Integration Page on Datadog's website <https://docs.datadoghq.com/integrations/scylla/>`_
+* `Monitoring ScyllaDB with Datadog: A Tale about Datadog – Prometheus integration <https://www.scylladb.com/2019/10/02/monitoring-scylla-with-datadog-a-tale-about-datadog-prometheus-integration/>`_
+* `ScyllaDB Integration Page on Datadog's website <https://docs.datadoghq.com/integrations/scylla/>`_
* `Datadog Blog <https://www.datadoghq.com/blog/monitor-scylla-with-datadog/>`_
diff --git a/docs/using-scylla/integrations/integration-elasticsearch.rst b/docs/using-scylla/integrations/integration-elasticsearch.rst
--- a/docs/using-scylla/integrations/integration-elasticsearch.rst
+++ b/docs/using-scylla/integrations/integration-elasticsearch.rst
@@ -1,20 +1,20 @@
-===================================
-Integrate Scylla with Elasticsearch
-===================================
+=====================================
+Integrate ScyllaDB with Elasticsearch
+=====================================


-Scylla is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with Scylla (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with Scylla.
+ScyllaDB is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with ScyllaDB (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with ScyllaDB.

-The list below contains integration projects using Scylla with Elasticsearch. If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using ScyllaDB with Elasticsearch. If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.


-* `Scylla and Elasticsearch Part One: Making the (Use) Case for Both <https://www.scylladb.com/2018/11/28/scylla-and-elasticsearch-part-one/>`_
+* `ScyllaDB and Elasticsearch Part One: Making the (Use) Case for Both <https://www.scylladb.com/2018/11/28/scylla-and-elasticsearch-part-one/>`_

-* `Scylla and Elasticsearch, Part Two: Practical Examples to Support Full-Text Search Workloads <https://www.scylladb.com/2019/03/07/scylla-and-elasticsearch-part-two-practical-examples-to-support-full-text-search-workloads/>`_
+* `ScyllaDB and Elasticsearch, Part Two: Practical Examples to Support Full-Text Search Workloads <https://www.scylladb.com/2019/03/07/scylla-and-elasticsearch-part-two-practical-examples-to-support-full-text-search-workloads/>`_

-* `Data Analytics with Elasticsearch and Scylla <https://www.scylladb.com/2017/08/03/data-analytics-elastic-scylla/>`_
+* `Data Analytics with Elasticsearch and ScyllaDB <https://www.scylladb.com/2017/08/03/data-analytics-elastic-scylla/>`_

-* `Zenly Discusses Going from Elasticsearch to Scylla at Scylla Summit 2017 <https://www.scylladb.com/2017/10/06/zenly-elasticsearch-scylla/>`_
+* `Zenly Discusses Going from Elasticsearch to ScyllaDB at ScyllaDB Summit 2017 <https://www.scylladb.com/2017/10/06/zenly-elasticsearch-scylla/>`_



diff --git a/docs/using-scylla/integrations/integration-iota.rst b/docs/using-scylla/integrations/integration-iota.rst
--- a/docs/using-scylla/integrations/integration-iota.rst
+++ b/docs/using-scylla/integrations/integration-iota.rst
@@ -1,9 +1,9 @@
-====================================
-Integrate Scylla with IOTA Chronicle
-====================================
+======================================
+Integrate ScyllaDB with IOTA Chronicle
+======================================

The IOTA protocol is a permissionless trust layer for the Internet of Things which enables a frictionless exchange of value between machines and humans.
-Anyone can secure data on the Tangle and make it verifiable to third-parties such as Scylla.
+Anyone can secure data on the Tangle and make it verifiable to third-parties such as ScyllaDB.


An example of such a third-party integration uses Chronicle as follows:
@@ -12,8 +12,8 @@ An example of such a third-party integration uses Chronicle as follows:
.. image:: images/iota.png
:width: 600pt

-The list below contains integration projects using Scylla with IOTA Chronicle.
-If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using ScyllaDB with IOTA Chronicle.
+If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.



diff --git a/docs/using-scylla/integrations/integration-jaeger.rst b/docs/using-scylla/integrations/integration-jaeger.rst
--- a/docs/using-scylla/integrations/integration-jaeger.rst
+++ b/docs/using-scylla/integrations/integration-jaeger.rst
@@ -1,6 +1,6 @@
-===================================
-Integrate Scylla with Jaeger Server
-===================================
+=====================================
+Integrate ScyllaDB with Jaeger Server
+=====================================

`Jaeger Server <https://www.jaegertracing.io>`_ is an open-source distributed tracing system, originally developed by Uber Technologies,
aimed at monitoring and troubleshooting the performance of microservices-based applications.
diff --git a/docs/using-scylla/integrations/integration-janus.rst b/docs/using-scylla/integrations/integration-janus.rst
--- a/docs/using-scylla/integrations/integration-janus.rst
+++ b/docs/using-scylla/integrations/integration-janus.rst
@@ -1,6 +1,6 @@
-======================================================
-Integrate Scylla with the JanusGraph Graph Data System
-======================================================
+========================================================
+Integrate ScyllaDB with the JanusGraph Graph Data System
+========================================================

A graph data system (or graph database) is a database that uses a graph structure with nodes and edges to represent data. Edges represent relationships between nodes, and these relationships allow the data to be linked and for the graph to be visualized. It’s possible to use different storage mechanisms for the underlying data, and this choice affects the performance, scalability, ease of maintenance, and cost.

@@ -10,14 +10,14 @@ Some common use cases for graph databases are knowledge graphs, recommendation a

The data storage layer for JanusGraph is pluggable - you can choose from several storage systems, including ScyllaDB.

-In the ScyllaDB University lesson, `A Graph Data System Powered by ScyllaDB and JanusGraph - Part 1 <https://university.scylladb.com/courses/the-mutant-monitoring-system-training-course/lessons/a-graph-data-system-powered-by-scylladb-and-janusgraph/>`_ , you can learn more about using JanusGraph with ScyllaDB as the underlying data storage layer, and see a hands-on, step-by-step example. Another lesson, `ScyllaDB and JanusGraph - Part 2 <https://university.scylladb.com/courses/the-mutant-monitoring-system-training-course/lessons/a-graph-data-system-powered-by-scylladb-and-janusgraph-part-2/>`_ , covers the JanusGraph data model, how data is persisted using Scylla as a backend for JanusGraph, and an example of persistence in case of server failure.
+In the ScyllaDB University lesson, `A Graph Data System Powered by ScyllaDB and JanusGraph - Part 1 <https://university.scylladb.com/courses/the-mutant-monitoring-system-training-course/lessons/a-graph-data-system-powered-by-scylladb-and-janusgraph/>`_ , you can learn more about using JanusGraph with ScyllaDB as the underlying data storage layer, and see a hands-on, step-by-step example. Another lesson, `ScyllaDB and JanusGraph - Part 2 <https://university.scylladb.com/courses/the-mutant-monitoring-system-training-course/lessons/a-graph-data-system-powered-by-scylladb-and-janusgraph-part-2/>`_ , covers the JanusGraph data model, how data is persisted using ScyllaDB as a backend for JanusGraph, and an example of persistence in case of server failure.

-The list below contains integration projects using Scylla with JanusGraph. If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using ScyllaDB with JanusGraph. If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.

-* `QOMPLX: Using Scylla with JanusGraph for Cybersecurity <https://www.scylladb.com/2021/03/11/qomplx-using-scylla-with-janusgraph-for-cybersecurity/>`_
+* `QOMPLX: Using ScyllaDB with JanusGraph for Cybersecurity <https://www.scylladb.com/2021/03/11/qomplx-using-scylla-with-janusgraph-for-cybersecurity/>`_

-* `Zeotap: A Graph of Twenty Billion IDs Built on Scylla and JanusGraph <https://www.scylladb.com/2020/05/14/zeotap-a-graph-of-twenty-billion-ids-built-on-scylla-and-janusgraph/>`_
+* `Zeotap: A Graph of Twenty Billion IDs Built on ScyllaDB and JanusGraph <https://www.scylladb.com/2020/05/14/zeotap-a-graph-of-twenty-billion-ids-built-on-scylla-and-janusgraph/>`_

-* `Powering a Graph Data System with Scylla + JanusGraph <https://www.scylladb.com/2019/05/14/powering-a-graph-data-system-with-scylla-janusgraph/>`_
+* `Powering a Graph Data System with ScyllaDB + JanusGraph <https://www.scylladb.com/2019/05/14/powering-a-graph-data-system-with-scylla-janusgraph/>`_

-* `Scylla Shines in IBM’s Performance Tests for JanusGraph <https://www.scylladb.com/users/case-study-scylla-shines-in-ibms-performance-tests-for-janusgraph/>`_
+* `ScyllaDB Shines in IBM’s Performance Tests for JanusGraph <https://www.scylladb.com/users/case-study-scylla-shines-in-ibms-performance-tests-for-janusgraph/>`_
diff --git a/docs/using-scylla/integrations/integration-k8.rst b/docs/using-scylla/integrations/integration-k8.rst
--- a/docs/using-scylla/integrations/integration-k8.rst
+++ b/docs/using-scylla/integrations/integration-k8.rst
@@ -1,22 +1,22 @@
-===================================
-Integrate Scylla with Kubernetes
-===================================
+==================================
+Integrate ScyllaDB with Kubernetes
+==================================


-The list below contains integration projects using Scylla with Kubernetes. If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using ScyllaDB with Kubernetes. If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.

-* `Kubernetes and Scylla: 10 Questions and Answers <https://www.scylladb.com/2018/06/14/webinar-questions-kubernetes/>`_
-* `Exploring Scylla on Kubernetes <https://www.scylladb.com/2018/03/29/scylla-kubernetes-overview/>`_
-* `Kubernetes Operator lesson <https://university.scylladb.com/courses/scylla-operations/lessons/kubernetes-operator/>`_ on Scylla University
+* `Kubernetes and ScyllaDB: 10 Questions and Answers <https://www.scylladb.com/2018/06/14/webinar-questions-kubernetes/>`_
+* `Exploring ScyllaDB on Kubernetes <https://www.scylladb.com/2018/03/29/scylla-kubernetes-overview/>`_
+* `Kubernetes Operator lesson <https://university.scylladb.com/courses/scylla-operations/lessons/kubernetes-operator/>`_ on ScyllaDB University


-Scylla Operator
-===============
+ScyllaDB Operator
+=================

-`Scylla Operator <https://github.com/scylladb/scylla-operator>`_ is a Kubernetes Operator for managing and automating tasks related to managing Scylla clusters.
+`ScyllaDB Operator <https://github.com/scylladb/scylla-operator>`_ is a Kubernetes Operator for managing and automating tasks related to managing ScyllaDB clusters.

For more information see:

-* `Scylla Operator README <https://github.com/scylladb/scylla-operator/blob/master/README.md>`_
+* `ScyllaDB Operator README <https://github.com/scylladb/scylla-operator/blob/master/README.md>`_

-* `Scylla Operator Documentation <https://operator.docs.scylladb.com/stable/>`_
+* `ScyllaDB Operator Documentation <https://operator.docs.scylladb.com/stable/>`_
diff --git a/docs/using-scylla/integrations/integration-kafka.rst b/docs/using-scylla/integrations/integration-kafka.rst
--- a/docs/using-scylla/integrations/integration-kafka.rst
+++ b/docs/using-scylla/integrations/integration-kafka.rst
@@ -1,6 +1,6 @@
-==============================
-Integrate Scylla with Kafka
-==============================
+=============================
+Integrate ScyllaDB with Kafka
+=============================

.. toctree::
:hidden:
@@ -9,25 +9,25 @@ Integrate Scylla with Kafka
scylla-cdc-source-connector

Apache Kafka is capable of delivering reliable, scalable, high-throughput data streams across a myriad of data sources and sinks.
-A great number of open source users and enterprise customers use Scylla and Kafka together.
-You can use Scylla and Apache Kafka in integration solutions, such as creating a scalable backend for an IoT service.
-If you have tested your application with Scylla and Kafka and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+A great number of open source users and enterprise customers use ScyllaDB and Kafka together.
+You can use ScyllaDB and Apache Kafka in integration solutions, such as creating a scalable backend for an IoT service.
+If you have tested your application with ScyllaDB and Kafka and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.


Additional Information
----------------------
**Documentation**

-* :doc:`Shard-Aware Kafka Connector for Scylla <sink-kafka-connector>`
-* :doc:`Scylla CDC Source Connector <scylla-cdc-source-connector>`
+* :doc:`Shard-Aware Kafka Connector for ScyllaDB <sink-kafka-connector>`
+* :doc:`ScyllaDB CDC Source Connector <scylla-cdc-source-connector>`

**Blog Posts**

-* `Introducing the Kafka Scylla Connector <https://www.scylladb.com/2020/02/18/introducing-the-kafka-scylla-connector/>`_
+* `Introducing the Kafka ScyllaDB Connector <https://www.scylladb.com/2020/02/18/introducing-the-kafka-scylla-connector/>`_

**Presentations**

-* `Streaming Data from Scylla to Kafka <https://www.scylladb.com/presentations/streaming-data-from-scylla-to-kafka/>`_
+* `Streaming Data from ScyllaDB to Kafka <https://www.scylladb.com/presentations/streaming-data-from-scylla-to-kafka/>`_

**Git Projects**

diff --git a/docs/using-scylla/integrations/integration-kairos.rst b/docs/using-scylla/integrations/integration-kairos.rst
--- a/docs/using-scylla/integrations/integration-kairos.rst
+++ b/docs/using-scylla/integrations/integration-kairos.rst
@@ -1,30 +1,30 @@


-==============================
-Integrate Scylla with KairosDB
-==============================
+================================
+Integrate ScyllaDB with KairosDB
+================================

About KairosDB
==============

-KairosDB is a fast distributed scalable time-series database. It was initially a rewrite of the original OpenTSDB project, but it evolved into a different system where data management, data processing, and visualization are fully separated. When KairosDB introduced native CQL support in version 1.2.0, we created a performance test for KairosDB and Scylla. Through this process, we discovered how easily both platforms could be integrated with each other. The results are presented here in an example that you can adapt to suit your needs. More information on KairosDB can be found on the KairosDB `website <https://kairosdb.github.io/>`_.
+KairosDB is a fast distributed scalable time-series database. It was initially a rewrite of the original OpenTSDB project, but it evolved into a different system where data management, data processing, and visualization are fully separated. When KairosDB introduced native CQL support in version 1.2.0, we created a performance test for KairosDB and ScyllaDB. Through this process, we discovered how easily both platforms could be integrated with each other. The results are presented here in an example that you can adapt to suit your needs. More information on KairosDB can be found on the KairosDB `website <https://kairosdb.github.io/>`_.

-Benefits of integrating KairosDB with Scylla
---------------------------------------------
+Benefits of integrating KairosDB with ScyllaDB
+----------------------------------------------

-A highly available time-series solution requires an efficient, tailored frontend framework and a backend database with a fast ingestion rate. KairosDB provides a simple and reliable way to ingest and retrieve sensors’ information or metrics, while Scylla provides a highly reliable, performant, and highly available backend that scales indefinitely and can store large quantities of time-series data.
+A highly available time-series solution requires an efficient, tailored frontend framework and a backend database with a fast ingestion rate. KairosDB provides a simple and reliable way to ingest and retrieve sensors’ information or metrics, while ScyllaDB provides a highly reliable, performant, and highly available backend that scales indefinitely and can store large quantities of time-series data.

Use case for integration
------------------------
-The diagram below shows a typical integration scenario where several sensors (in this case, GPU temperature sensors) are sending data to KairosDB node(s). The KairosDB nodes are using a Scylla cluster as a backend datastore. To interact with KairosDB, there is a web based UI.
+The diagram below shows a typical integration scenario where several sensors (in this case, GPU temperature sensors) are sending data to KairosDB node(s). The KairosDB nodes are using a ScyllaDB cluster as a backend datastore. To interact with KairosDB, there is a web based UI.

.. image:: images/kairos-arch.png
:align: left
:alt: scylla and kairos solution

**Legend**

-1. Scylla cluster
+1. ScyllaDB cluster
2. KairosDB nodes
3. GPU sensors
4. WebUI for KairosDB
@@ -36,7 +36,7 @@ Recommendations
---------------
In order to implement this integration example, the following are recommendations:

-* It is recommended to deploy KairosDB separately from Scylla, to prevent the databases from competing for resources.
+* It is recommended to deploy KairosDB separately from ScyllaDB, to prevent the databases from competing for resources.
* Make sure to have sufficient disk space, as KairosDB accumulates data files queued on disk.
* KairosDB requires Java (and JAVA_HOME setting) as per the procedure `here <https://www.digitalocean.com/community/tutorials/how-to-install-java-with-apt-get-on-ubuntu-16-04>`_.

@@ -45,12 +45,12 @@ Resource list
-------------
Although your requirements may be different, this example uses the following resources:

-* Scylla cluster: 3 x i3.8XL instances
+* ScyllaDB cluster: 3 x i3.8XL instances
* KairosDB node: m5.2XL instance(s)
* Loaders (python script emulating the sensors): m5.2XL instance(s)
* Disk space 200GB for the KairosDB nodes

-Note that in this case, 200GB was sufficient, but your disk space depends on the workload size from the application/s into Kairos and the speed in which KairosDB can handle the load and write it to the Scylla backend datastore.
+Note that in this case, 200GB was sufficient, but your disk space depends on the workload size from the application(s) into Kairos and the speed at which KairosDB can handle the load and write it to the ScyllaDB backend datastore.

Integration instructions
------------------------
@@ -59,7 +59,7 @@ The commands shown in this procedure may require root user or sudo.

**Before you begin**

-Verify that you have installed Scylla on a different instance/server and that you know the Scylla server IP address.
+Verify that you have installed ScyllaDB on a different instance/server and that you know the ScyllaDB server IP address.

**Procedure**

@@ -75,7 +75,7 @@ Verify that you have installed Scylla on a different instance/server and that yo

sudo tar xvzf kairosdb-1.2.0-1.tar.gz

-3. Configure KairosDB to connect to the Scylla server.
+3. Configure KairosDB to connect to the ScyllaDB server.
Using an editor, open the ``kairosdb/conf/kairosdb.properties`` file and make the following edits:

* Comment out the H2 module
@@ -90,13 +90,13 @@ Verify that you have installed Scylla on a different instance/server and that yo

kairosdb.service.datastore=org.kairosdb.datastore.cassandra.CassandraModule

- * In the ``#Cassandra properties`` section, set the Scylla nodes IP
+ * In the ``#Cassandra properties`` section, set the ScyllaDB nodes IP

.. code-block:: none

kairosdb.datastore.cassandra.cql_host_list=[IP1],[IP2]...

- * Set the :doc:`replication </architecture/architecture-fault-tolerance>` factor (for production purposes use a Scylla cluster with a minimum of RF=3)
+ * Set the :doc:`replication </architecture/architecture-fault-tolerance>` factor (for production purposes use a ScyllaDB cluster with a minimum of RF=3)


.. code-block:: none
@@ -110,20 +110,20 @@ Verify that you have installed Scylla on a different instance/server and that yo
kairosdb.datastore.cassandra.read_consistency_level=QUORUM
kairosdb.datastore.cassandra.write_consistency_level=ONE (sufficient for time series workload)

- * In case your Scylla / Cassandra cluster is deployed on multiple data centers, change the local datacenter parameter to match the data center you are using.
+ * In case your ScyllaDB / Cassandra cluster is deployed on multiple data centers, change the local datacenter parameter to match the data center you are using.

.. code-block:: none

kairosdb.datastore.cassandra.local_datacenter=[your_local_DC_name]

- * Set connections per host to match the # of shards that Scylla utilizes. Check the number of shards by running the following command on your scylla nodes:
+ * Set connections per host to match the # of shards that ScyllaDB utilizes. Check the number of shards by running the following command on your ScyllaDB nodes:

.. code-block:: none

> cat /etc/scylla.d/cpuset.conf
CPUSET="--cpuset 1-15,17-31"

- In this case, Scylla is using 30 CPU threads (out of 32) as 1 physical core is dedicated to interrupts handling. Set the following Kairos connections:
+ In this case, ScyllaDB is using 30 CPU threads (out of 32) as 1 physical core is dedicated to interrupts handling. Set the following Kairos connections:

.. code-block:: none

@@ -142,7 +142,7 @@ Verify that you have installed Scylla on a different instance/server and that yo


* Set the Kairos batch size (default = 200) and the minimum batch size (default = 100).
- Testing found that it is necessary to use a smaller value than the default setting. This was because one of Scylla's shard handling batches can spike to 100% CPU when handling a heavy load from Kairos, which leads to write timeout and poor latency results. In the example, we found the best performance when it is set to 50. When we deployed three Kairos nodes, we divided the load so that each node was set to 15.
+ Testing found that it is necessary to use a smaller value than the default setting. This was because one of ScyllaDB's shards handling batches can spike to 100% CPU when handling a heavy load from Kairos, which leads to write timeouts and poor latency results. In the example, we found the best performance when it was set to 50. When we deployed three Kairos nodes, we divided the load so that each node was set to 15.

.. code-block:: none

@@ -171,7 +171,7 @@ Verify that you have installed Scylla on a different instance/server and that yo
kairosdb.datastore.cassandra.force_default_datapoint_ttl=false

4. Using multiple Kairos instances (optional).
- You might need to use more than a single KairosDB instance to push more data into Scylla, as there are some limits in the Cassandra client that prevents a single kairos instance from pushing faster. To deploy multiple Kairos nodes, shard the clients / sensors, and assign several ingesting clients per Kairos node. Note that in this case, the data is not divided, but each Kairos node is assigned to several clients.
+ You might need to use more than a single KairosDB instance to push more data into ScyllaDB, as there are limits in the Cassandra client that prevent a single Kairos instance from pushing faster. To deploy multiple Kairos nodes, shard the clients / sensors, and assign several ingesting clients per Kairos node. Note that in this case, the data is not divided, but each Kairos node is assigned to several clients.

5. Start KairosDB process.
Change to the bin directory and start KairosDB using one of the following commands:
@@ -194,7 +194,7 @@ Verify that you have installed Scylla on a different instance/server and that yo

> sudo ./kairosdb.sh stop

-6. To verify that the KairosDB Schema was created properly in your Scylla cluster, connect to one of the Scylla cluster nodes and open cql shell:
+6. To verify that the KairosDB Schema was created properly in your ScyllaDB cluster, connect to one of the ScyllaDB cluster nodes and open cql shell:

.. code-block:: none

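As a hedged sketch of this verification step (assuming KairosDB's default keyspace name, ``kairosdb``), the check from cqlsh could look like:

.. code-block:: cql

   -- Illustrative only: the keyspace name is KairosDB's default and may differ.
   DESCRIBE KEYSPACES;
   USE kairosdb;
   DESCRIBE TABLES;
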
@@ -219,17 +219,17 @@ Verify that you have installed Scylla on a different instance/server and that yo
Ansible playbook
================

-A KairosDB deployment Ansible playbook for your use is available `on github <https://github.com/scylladb/scylla-code-samples/tree/master/deploy_kairosdb>`_. It requires that you `install <https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-16-04>`_ Ansible v2.3 or higher and that a Scylla cluster up and running.
+A KairosDB deployment Ansible playbook for your use is available `on GitHub <https://github.com/scylladb/scylla-code-samples/tree/master/deploy_kairosdb>`_. It requires that you `install <https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-ubuntu-16-04>`_ Ansible v2.3 or higher and that a ScyllaDB cluster is up and running.

Setup Ansible playbook
----------------------
**Procedure**

1. Set the following variables in kairosdb_deploy.yml file:

- * Scylla node(s) IP address(es)
- * Number of shards per node that Scylla utilizes (cat /etc/scylla.d/cpuset.conf)
- * KairosDB batch size - when using a single KairosDB instance with Scylla, while Scylla runs on i3.8XL instance, the value should be set to '50'. When using multiple KairosDB nodes, or when Scylla runs on smaller instances, the value should be lower. If you are using multiple KairosDB nodes, you need to divide the batch size evenly per node.
+ * ScyllaDB node(s) IP address(es)
+ * Number of shards per node that ScyllaDB utilizes (cat /etc/scylla.d/cpuset.conf)
+ * KairosDB batch size - when using a single KairosDB instance with ScyllaDB, while ScyllaDB runs on i3.8XL instance, the value should be set to '50'. When using multiple KairosDB nodes, or when ScyllaDB runs on smaller instances, the value should be lower. If you are using multiple KairosDB nodes, you need to divide the batch size evenly per node.
2. Run the playbook:

* Run locally: add ``‘localhost ansible_connection=local’`` to the ``/etc/ansible/hosts`` file
diff --git a/docs/using-scylla/integrations/integration-mindsdb.rst b/docs/using-scylla/integrations/integration-mindsdb.rst
--- a/docs/using-scylla/integrations/integration-mindsdb.rst
+++ b/docs/using-scylla/integrations/integration-mindsdb.rst
@@ -1,6 +1,6 @@
-===================================
-Integrate Scylla with MindsDB
-===================================
+===============================
+Integrate ScyllaDB with MindsDB
+===============================

`MindsDB <https://github.com/mindsdb/mindsdb>`_ is a machine learning platform to help developers build AI-powered solutions. It helps automate and integrate machine learning frameworks (including `GPT-4 <https://en.wikipedia.org/wiki/GPT-4>`_) into the data stack as "AI Tables" to streamline the integration of AI into applications, making it accessible to developers of all skill levels. "AI tables" enable continuous learning from existing data.

diff --git a/docs/using-scylla/integrations/integration-spark.rst b/docs/using-scylla/integrations/integration-spark.rst
--- a/docs/using-scylla/integrations/integration-spark.rst
+++ b/docs/using-scylla/integrations/integration-spark.rst
@@ -1,25 +1,25 @@
-===========================
-Integrate Scylla with Spark
-===========================
+=============================
+Integrate ScyllaDB with Spark
+=============================


-Scylla is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with Scylla (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with Scylla.
+ScyllaDB is Apache Cassandra compatible at the CQL binary protocol level, and any driver which uses CQL will work with ScyllaDB (more :doc:`here </using-scylla/drivers/index>`). Any application which uses a CQL driver will work with ScyllaDB.

-The list below contains integration projects using Scylla with Spark. If you have tested your application with Scylla and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+The list below contains integration projects using ScyllaDB with Spark. If you have tested your application with ScyllaDB and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.



-* `Hooking up Spark and Scylla (multi-part blog - part 1) <https://www.scylladb.com/2018/07/31/spark-scylla/>`_
+* `Hooking up Spark and ScyllaDB (multi-part blog - part 1) <https://www.scylladb.com/2018/07/31/spark-scylla/>`_

-* `Hooking up Spark and Scylla (multi-part blog - part 2) <https://www.scylladb.com/2018/08/21/spark-scylla-2/>`_
+* `Hooking up Spark and ScyllaDB (multi-part blog - part 2) <https://www.scylladb.com/2018/08/21/spark-scylla-2/>`_

-* `Hooking up Spark and Scylla (multi-part blog - part 3) <https://www.scylladb.com/2018/10/08/hooking-up-spark-and-scylla-part-3/>`_
+* `Hooking up Spark and ScyllaDB (multi-part blog - part 3) <https://www.scylladb.com/2018/10/08/hooking-up-spark-and-scylla-part-3/>`_

-* `Hooking up Spark and Scylla (multi-part blog - part 4) <https://www.scylladb.com/2018/11/13/hooking-up-spark-and-scylladb-part-4/>`_
+* `Hooking up Spark and ScyllaDB (multi-part blog - part 4) <https://www.scylladb.com/2018/11/13/hooking-up-spark-and-scylladb-part-4/>`_

* :doc:`Integration with Spark (KB article) </kb/scylla-and-spark-integration>`

-* `Analyzing flight delays with Scylla on top of Spark (blog entry) <https://www.scylladb.com/2017/05/02/analyzing-flight-delays-scylla-spark-2/>`_
+* `Analyzing flight delays with ScyllaDB on top of Spark (blog entry) <https://www.scylladb.com/2017/05/02/analyzing-flight-delays-scylla-spark-2/>`_

* `Using Spark with ScyllaDB lesson <https://university.scylladb.com/courses/the-mutant-monitoring-system-training-course/lessons/using-spark-with-scylla/>`_ on ScyllaDB University

diff --git a/docs/using-scylla/integrations/integration-spring.rst b/docs/using-scylla/integrations/integration-spring.rst
--- a/docs/using-scylla/integrations/integration-spring.rst
+++ b/docs/using-scylla/integrations/integration-spring.rst
@@ -1,10 +1,10 @@
==============================
-Integrate Scylla with Spring
+Integrate ScyllaDB with Spring
==============================

`Spring <https://spring.io>`_ is a Java framework for creating easy-to-use web services.
`Spring Boot <https://spring.io/projects/spring-boot>`_ makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run".
-If you have tested your application with Scylla and Spring and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.
+If you have tested your application with ScyllaDB and Spring and want to publish the results, contact us using the `community forum <https://forum.scylladb.com>`_.


Additional Information
diff --git a/docs/using-scylla/integrations/kafka-connector.rst b/docs/using-scylla/integrations/kafka-connector.rst
--- a/docs/using-scylla/integrations/kafka-connector.rst
+++ b/docs/using-scylla/integrations/kafka-connector.rst
@@ -16,7 +16,7 @@ This quickstart will show how to setup the ScyllaDB Sink Connector against a Doc

Preliminary setup
-----------------
-#. Using `Docker <https://hub.docker.com/r/scylladb/scylla/>`_, follow the instructions to launch Scylla.
+#. Using `Docker <https://hub.docker.com/r/scylladb/scylla/>`_, follow the instructions to launch ScyllaDB.
#. Start the Docker container, replacing the ``--name`` and ``--hostname`` parameters with your own information. For example:

.. code-block:: none
@@ -70,7 +70,7 @@ Install Kafka Connector manually
Add Sink Connector plugin
-------------------------

-The Scylla sink connector is used to publish records from a Kafka topic into Scylla.
+The ScyllaDB sink connector is used to publish records from a Kafka topic into ScyllaDB.
Adding a new connector plugin requires restarting Connect.
Use the Confluent CLI to restart Connect.

@@ -149,7 +149,7 @@ Connector configuration
{"name":"firstName","type":"string"},{"name":"lastName","type":"string"}]}'
{"id":1}${"id":1,"firstName":"first","lastName":"last"}

-#. Test Scylla by running a select cql query:
+#. Test ScyllaDB by running a select cql query:

.. code-block:: none

@@ -158,8 +158,8 @@ Connector configuration
----+-----------+----------
1 | first | last

-Scylla modes
-------------
+ScyllaDB modes
+--------------

There are two modes, Standalone and Distributed.

@@ -255,7 +255,7 @@ Run the select cql query to view the data:
Authentication
--------------

-This example connects to a Scylla instance with security enabled and username / password authentication.
+This example connects to a ScyllaDB instance with security enabled and username / password authentication.

Select one of the following configuration methods based on how you have deployed ``|kconnect-long|``. Distributed Mode will use the JSON / REST examples; Standalone Mode will use the properties-based example.

@@ -302,7 +302,7 @@ To check logs for the Confluent Platform use:

confluent local log <service> -- [<argument>] --path <path-to-confluent>

-To check logs for Scylla:
+To check logs for ScyllaDB:

.. code-block:: none

diff --git a/docs/using-scylla/integrations/scylla-cdc-source-connector-quickstart.rst b/docs/using-scylla/integrations/scylla-cdc-source-connector-quickstart.rst
--- a/docs/using-scylla/integrations/scylla-cdc-source-connector-quickstart.rst
+++ b/docs/using-scylla/integrations/scylla-cdc-source-connector-quickstart.rst
@@ -1,27 +1,27 @@
-==============================================
-Scylla CDC Source Connector Quickstart
-==============================================
+========================================
+ScyllaDB CDC Source Connector Quickstart
+========================================


Synopsis
--------

-This quickstart will show you how to setup the Scylla CDC Source Connector to replicate changes made in
-a Scylla table using :doc:`Scylla CDC <../cdc/cdc-intro>`.
+This quickstart will show you how to set up the ScyllaDB CDC Source Connector to replicate changes made in
+a ScyllaDB table using :doc:`ScyllaDB CDC <../cdc/cdc-intro>`.

-Scylla setup
-------------
+ScyllaDB setup
+--------------

-First, let's setup a Scylla cluster and create a CDC-enabled table.
+First, let's set up a ScyllaDB cluster and create a CDC-enabled table.

-Scylla installation
-^^^^^^^^^^^^^^^^^^^
+ScyllaDB installation
+^^^^^^^^^^^^^^^^^^^^^

-For the purpose of this quickstart, we will configure a Scylla instance using Docker. You can skip this
-section if you have already installed Scylla. To learn more about installing Scylla in production
-environments, please refer to the :doc:`Install Scylla page </getting-started/install-scylla/index>`.
+For the purpose of this quickstart, we will configure a ScyllaDB instance using Docker. You can skip this
+section if you have already installed ScyllaDB. To learn more about installing ScyllaDB in production
+environments, please refer to the :doc:`Install ScyllaDB page </getting-started/install-scylla/index>`.

-#. Using `Docker <https://hub.docker.com/r/scylladb/scylla/>`_, follow the instructions to launch Scylla.
+#. Using `Docker <https://hub.docker.com/r/scylladb/scylla/>`_, follow the instructions to launch ScyllaDB.
#. Start the Docker container, replacing the ``--name`` and ``--hostname`` parameters with your own information. For example:

.. code-block:: bash
@@ -43,7 +43,7 @@ environments, please refer to the :doc:`Install Scylla page </getting-started/in
Creating a CDC-enabled table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Let's connect to your Scylla cluster and create a new CDC-enabled table. We will create an example table by
+Let's connect to your ScyllaDB cluster and create a new CDC-enabled table. We will create an example table by
issuing the following CQL query and insert some example data:

.. code-block:: cql
@@ -66,12 +66,12 @@ If you already have a table you wish to use, but it does not have CDC enabled, y

ALTER TABLE keyspace.table_name with cdc = {'enabled': true};

-To learn more about Scylla CDC, visit :doc:`Change Data Capture (CDC) page <../cdc/index>`.
+To learn more about ScyllaDB CDC, visit :doc:`Change Data Capture (CDC) page <../cdc/index>`.
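
As a hedged, minimal sketch of what a CDC-enabled table for this quickstart could look like (the ``quickstart_keyspace.orders`` name reappears later on this page; the columns below are illustrative only, not the quickstart's exact schema):

.. code-block:: cql

   -- Illustrative schema; CDC is enabled per table via the cdc option.
   CREATE KEYSPACE IF NOT EXISTS quickstart_keyspace
       WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

   CREATE TABLE IF NOT EXISTS quickstart_keyspace.orders (
       customer_id int,
       order_id int,
       product text,
       PRIMARY KEY (customer_id, order_id)
   ) WITH cdc = {'enabled': true};

   INSERT INTO quickstart_keyspace.orders (customer_id, order_id, product)
       VALUES (1, 1, 'pizza');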

Kafka setup
-----------

-Scylla CDC Source Connector works well with both `open-source Kafka <https://kafka.apache.org/>`_
+ScyllaDB CDC Source Connector works well with both `open-source Kafka <https://kafka.apache.org/>`_
and `Confluent Platform <https://www.confluent.io/>`_. In this quickstart we will show how
to install the Confluent Platform and deploy the connector (applicable to both open-source Kafka
and Confluent Platform).
@@ -87,10 +87,10 @@ If you are new to Confluent, `download Confluent Platform <https://www.confluent
#. You will receive an email with instructions. Download / move the file to the desired location
#. Continue with the setup following `this document <https://docs.confluent.io/current/quickstart/ce-quickstart.html#ce-quickstart>`_

-Installing Scylla CDC Source Connector
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Installing ScyllaDB CDC Source Connector
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-#. Download or build Scylla CDC Source Connector using `the project build instructions <https://github.com/scylladb/scylla-cdc-source-connector#building>`_
+#. Download or build ScyllaDB CDC Source Connector using `the project build instructions <https://github.com/scylladb/scylla-cdc-source-connector#building>`_

#. Deploy the connector:

@@ -101,13 +101,13 @@ Installing Scylla CDC Source Connector
Connector configuration
-----------------------

-After you have successfully configured Scylla and Kafka, the next step is to configure the connector
+After you have successfully configured ScyllaDB and Kafka, the next step is to configure the connector
and start it up.

Configuration using Confluent Control Center
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-If you use Confluent Platform, the easiest way to configure and start up the Scylla CDC Source Connector
+If you use Confluent Platform, the easiest way to configure and start up the ScyllaDB CDC Source Connector
is to use Confluent Control Center web interface.

#. Open the Confluent Control Center. By default, it is started at port ``9021``:
@@ -145,15 +145,15 @@ is to use Confluent Control Center web interface.
#. Name: the name of this configuration
#. Key converter class, value converter class: converters that determine the format
of produced messages. You can read more about them at `Kafka Connect Deep Dive – Converters and Serialization Explained <https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/>`_
- #. Hosts: contact points of Scylla
- #. Namespace: a unique name that identifies the Scylla cluster and that is used as a prefix for all schemas, topics.
+ #. Hosts: contact points of ScyllaDB
+ #. Namespace: a unique name that identifies the ScyllaDB cluster and that is used as a prefix for all schemas, topics.
#. Table names: the names of CDC-enabled tables you want to replicate

For the quickstart example here are the values we will use:

#. Name: ``QuickstartConnector``
#. Key converter class, value converter class: ``org.apache.kafka.connect.json.JsonConverter``
- #. Hosts: ``172.17.0.2:9042`` (Scylla started in Docker)
+ #. Hosts: ``172.17.0.2:9042`` (ScyllaDB started in Docker)
#. Namespace: ``QuickstartConnectorNamespace``
#. Table names: ``quickstart_keyspace.orders``

@@ -197,4 +197,4 @@ Configuration using open-source Kafka
Additional information
----------------------

-* `Scylla CDC Source Connector GitHub project <https://github.com/scylladb/scylla-cdc-source-connector>`_
+* `ScyllaDB CDC Source Connector GitHub project <https://github.com/scylladb/scylla-cdc-source-connector>`_
diff --git a/docs/using-scylla/integrations/scylla-cdc-source-connector.rst b/docs/using-scylla/integrations/scylla-cdc-source-connector.rst
--- a/docs/using-scylla/integrations/scylla-cdc-source-connector.rst
+++ b/docs/using-scylla/integrations/scylla-cdc-source-connector.rst
@@ -1,28 +1,28 @@
-==========================================
-Scylla CDC Source Connector
-==========================================
+=============================
+ScyllaDB CDC Source Connector
+=============================

.. toctree::
:hidden:

scylla-cdc-source-connector-quickstart

-Scylla CDC Source Connector is a source connector capturing row-level changes in the tables of a Scylla cluster. It is a Debezium connector, compatible with Kafka Connect (with Kafka 2.6.0+) and built on top of scylla-cdc-java library. The source code of the connector is available at `GitHub <https://github.com/scylladb/scylla-cdc-source-connector>`_.
+ScyllaDB CDC Source Connector is a source connector capturing row-level changes in the tables of a ScyllaDB cluster. It is a Debezium connector, compatible with Kafka Connect (with Kafka 2.6.0+) and built on top of scylla-cdc-java library. The source code of the connector is available at `GitHub <https://github.com/scylladb/scylla-cdc-source-connector>`_.

-The connector reads the CDC log for specified tables and produces Kafka messages for each row-level ``INSERT``, ``UPDATE`` or ``DELETE`` operation. The connector is able to split reading the CDC log across multiple processes: the connector can start a separate Kafka Connect task for reading each :doc:`Vnode of Scylla cluster </architecture/ringarchitecture/index>` allowing for high throughput. You can limit the number of started tasks by using ``tasks.max`` property.
+The connector reads the CDC log for specified tables and produces Kafka messages for each row-level ``INSERT``, ``UPDATE`` or ``DELETE`` operation. The connector is able to split reading the CDC log across multiple processes: the connector can start a separate Kafka Connect task for reading each :doc:`Vnode of ScyllaDB cluster </architecture/ringarchitecture/index>` allowing for high throughput. You can limit the number of started tasks by using ``tasks.max`` property.

-Scylla CDC Source Connector seamlessly handles schema changes and topology changes (adding, removing nodes from Scylla cluster). The connector is fault-tolerant, retrying reading data from Scylla in case of failure. It periodically saves the current position in Scylla CDC log using Kafka Connect offset tracking (configurable by ``offset.flush.interval.ms`` parameter). If the connector is stopped, it is able to resume reading from previously saved offset. Scylla CDC Source Connector has at-least-once semantics.
+ScyllaDB CDC Source Connector seamlessly handles schema changes and topology changes (adding, removing nodes from ScyllaDB cluster). The connector is fault-tolerant, retrying reading data from ScyllaDB in case of failure. It periodically saves the current position in ScyllaDB CDC log using Kafka Connect offset tracking (configurable by ``offset.flush.interval.ms`` parameter). If the connector is stopped, it is able to resume reading from previously saved offset. ScyllaDB CDC Source Connector has at-least-once semantics.

The connector has the following capabilities:

* Kafka Connect connector using Debezium framework
-* Replication of row-level changes from Scylla using :doc:`Scylla CDC <../cdc/cdc-intro>`. The connector replicates the following operations: ``INSERT``, ``UPDATE``, ``DELETE`` (single row deletes)
+* Replication of row-level changes from ScyllaDB using :doc:`ScyllaDB CDC <../cdc/cdc-intro>`. The connector replicates the following operations: ``INSERT``, ``UPDATE``, ``DELETE`` (single row deletes)
* High scalability - able to split work across multiple Kafka Connect workers
* Fault tolerant - connector periodically saves its progress and can resume from previously saved offset (with at-least-once semantics)
* Support for many standard Kafka Connect converters, such as JSON and Avro
* Compatible with standard Kafka Connect transformations
* Metadata about CDC events - each generated Kafka message contains information about source, such as timestamp and table name
-* Seamless handling of schema changes and topology changes (adding, removing nodes from Scylla cluster)
+* Seamless handling of schema changes and topology changes (adding, removing nodes from ScyllaDB cluster)

The connector has the following limitations:

@@ -31,6 +31,6 @@ The connector has the following limitations:
* No support for collection types (``LIST``, ``SET``, ``MAP``) and ``UDT`` - columns with those types are omitted from generated messages
* Preimage and postimage - changes only contain those columns that were modified, not the entire row before/after change

-The following documents will help you get started with Scylla CDC Source Connector:
+The following documents will help you get started with ScyllaDB CDC Source Connector:

-* :doc:`Scylla CDC Source Connector Quickstart <scylla-cdc-source-connector-quickstart>`
\ No newline at end of file
+* :doc:`ScyllaDB CDC Source Connector Quickstart <scylla-cdc-source-connector-quickstart>`
\ No newline at end of file
diff --git a/docs/using-scylla/integrations/sink-config.rst b/docs/using-scylla/integrations/sink-config.rst
--- a/docs/using-scylla/integrations/sink-config.rst
+++ b/docs/using-scylla/integrations/sink-config.rst
@@ -4,9 +4,9 @@ Kafka Sink Connector Configuration

**Topic: Kafka Sink Connector configuration properties**

-**Learn: How to configure the Scylla Kafka Sink Connector**
+**Learn: How to configure the ScyllaDB Kafka Sink Connector**

-**Audience: Scylla application developers**
+**Audience: ScyllaDB application developers**


Synopsis
@@ -31,8 +31,8 @@ Connection
scylladb.contact.points
^^^^^^^^^^^^^^^^^^^^^^^

-Specifies which Scylla hosts to connect to.
-Scylla nodes use this list of hosts to find each other and learn the topology of the ring.
+Specifies which ScyllaDB hosts to connect to.
+ScyllaDB nodes use this list of hosts to find each other and learn the topology of the ring.
You must change this if you are running multiple nodes.
For larger clusters, it is essential to list at least two hosts for high availability purposes.
If you are using a docker image, connect to the host it uses.
@@ -44,7 +44,7 @@ If you are using a docker image, connect to the host it uses.
scylladb.port
^^^^^^^^^^^^^

-Specifies the port that the Scylla hosts are listening on.
+Specifies the port that the ScyllaDB hosts are listening on.
For example, when using a docker image, connect to the port it uses (use ``docker ps``).

* Type: Int
@@ -64,7 +64,7 @@ Specifies the local Data Center name (case-sensitive) that is local to the mach
scylladb.security.enabled
^^^^^^^^^^^^^^^^^^^^^^^^^

-Enables security while loading the sink connector and connecting to Scylla.
+Enables security while loading the sink connector and connecting to ScyllaDB.

* Type: Boolean
* Importance: High
@@ -73,7 +73,7 @@ Enables security while loading the sink connector and connecting to Scylla.
scylladb.username
^^^^^^^^^^^^^^^^^

-Specifies the username to use to connect to Scylla. Set ``scylladb.security.enable = true`` when using this parameter.
+Specifies the username to use to connect to ScyllaDB. Set ``scylladb.security.enable = true`` when using this parameter.

* Type: String
* Importance: High
@@ -82,7 +82,7 @@ Specifies the username to use to connect to Scylla. Set ``scylladb.security.enab
scylladb.password
^^^^^^^^^^^^^^^^^

-Specifies the password to use to connect to Scylla. Set ``scylladb.security.enable = true`` when using this parameter.
+Specifies the password to use to connect to ScyllaDB. Set ``scylladb.security.enable = true`` when using this parameter.

* Type: Password
* Importance: High
@@ -91,7 +91,7 @@ Specifies the password to use to connect to Scylla. Set ``scylladb.security.enab
scylladb.compression
^^^^^^^^^^^^^^^^^^^^

-Specifies the compression algorithm to use when connecting to Scylla.
+Specifies the compression algorithm to use when connecting to ScyllaDB.

* Type: string
* Default: NONE
@@ -101,7 +101,7 @@ Specifies the compression algorithm to use when connecting to Scylla.
scylladb.ssl.enabled
^^^^^^^^^^^^^^^^^^^^

-Specifies if SSL should be enabled when connecting to Scylla.
+Specifies if SSL should be enabled when connecting to ScyllaDB.

* Type: boolean
* Default: false
@@ -132,7 +132,7 @@ Specifies the password to use to access the Java Truststore.
scylladb.ssl.provider
^^^^^^^^^^^^^^^^^^^^^

-Specifies the SSL Provider to use when connecting to Scylla.
+Specifies the SSL Provider to use when connecting to ScyllaDB.

* Type: string
* Default: JDK
@@ -145,7 +145,7 @@ Keyspace
scylladb.keyspace
^^^^^^^^^^^^^^^^^

-Specifies the keyspace to write to. This keyspace is like a database in the Scylla cluster.
+Specifies the keyspace to write to. This keyspace is like a database in the ScyllaDB cluster.

* Type: String
* Importance: High
@@ -198,8 +198,8 @@ Specifies the compression algorithm to use when the table is created.
scylladb.offset.storage.table
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-The table within the Scylla keyspace to store the offsets that have been read from Apache Kafka.
-This is used once to enable delivery to Scylla.
+The table within the ScyllaDB keyspace to store the offsets that have been read from Apache Kafka.
+This is used to enable exactly-once delivery to ScyllaDB.

* Type: String
* Importance: Low
@@ -211,7 +211,7 @@ Write
scylladb.consistency.level
^^^^^^^^^^^^^^^^^^^^^^^^^^

-The requested consistency level to use when writing to Scylla.
+The requested consistency level to use when writing to ScyllaDB.
The Consistency Level (CL) determines how many replicas in a cluster must acknowledge a read or write operation before it is considered successful.

* Type: String
@@ -223,7 +223,7 @@ scylladb.deletes.enabled
^^^^^^^^^^^^^^^^^^^^^^^^

Determines if the connector should process deletes.
-The Kafka records with a Kafka record value as null will result in the deletion of the Scylla record with the primary key present in the Kafka record key.
+Kafka records with a null record value will result in the deletion of the ScyllaDB row whose primary key matches the Kafka record key.

* Type: boolean
* Default: true
@@ -232,7 +232,7 @@ The Kafka records with a Kafka record value as null will result in the deletion
scylladb.execute.timeout.ms
^^^^^^^^^^^^^^^^^^^^^^^^^^^

-The timeout for executing a Scylla statement.
+The timeout for executing a ScyllaDB statement.

* Type: Long
* Importance: Low
@@ -241,9 +241,9 @@ The timeout for executing a Scylla statement.
scylladb.ttl
^^^^^^^^^^^^

-The retention period for the data in Scylla.
-After this interval elapses, Scylla will remove these records.
-If this configuration is not provided, the Sink Connector will perform insert operations in Scylla without the TTL setting.
+The retention period for the data in ScyllaDB.
+After this interval elapses, ScyllaDB will remove these records.
+If this configuration is not provided, the Sink Connector will perform insert operations in ScyllaDB without the TTL setting.

* Type: Int
* Importance: Medium
@@ -252,8 +252,8 @@ If this configuration is not provided, the Sink Connector will perform insert op
scylladb.offset.storage.table.enable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-If true, Kafka consumer offsets will be stored in the Scylla table.
-If false, the connector will skip writing offset information into Scylla (this might imply duplicate writes into Scylla when a task restarts).
+If true, Kafka consumer offsets will be stored in the ScyllaDB table.
+If false, the connector will skip writing offset information into ScyllaDB (this might imply duplicate writes into ScyllaDB when a task restarts).

* Type: Boolean
* Importance: Medium
@@ -262,7 +262,7 @@ If false, the connector will skip writing offset information into Scylla (this m
scylladb.max.batch.size.kb
^^^^^^^^^^^^^^^^^^^^^^^^^^

-Maximum size(in kilobytes) of a single batch consisting of Scylla operations.
+Maximum size (in kilobytes) of a single batch consisting of ScyllaDB operations.
Should be equal to ``batch_size_warn_threshold_in_kb`` and 1/10th of the ``batch_size_fail_threshold_in_kb`` configured in ``scylla.yaml``.
The default value is set to 5kb, any change in this configuration should be accompanied by a change in ``scylla.yaml``.

@@ -285,7 +285,7 @@ Specifies the maximum number of tasks to use for the connector that helps in par
topics
^^^^^^

-Specifies the name of the topics to consume data from and write to Scylla.
+Specifies the name of the topics to consume data from and write to ScyllaDB.

* Type: list
* Importance: high
diff --git a/docs/using-scylla/integrations/sink-kafka-connector.rst b/docs/using-scylla/integrations/sink-kafka-connector.rst
--- a/docs/using-scylla/integrations/sink-kafka-connector.rst
+++ b/docs/using-scylla/integrations/sink-kafka-connector.rst
@@ -1,20 +1,20 @@
-==========================================
-Shard-Aware Kafka Connector for Scylla
-==========================================
+========================================
+Shard-Aware Kafka Connector for ScyllaDB
+========================================

.. toctree::
:hidden:

kafka-connector
sink-config

-You can use the Kafka Sink Connector for Scylla to bridge Scylla and Kafa.
-The connector allows you to use Apache Kafka and the Confluent platform while taking advantage of Scylla’s underlying shard-per-core, shared-nothing architecture.
+You can use the Kafka Sink Connector for ScyllaDB to bridge ScyllaDB and Kafka.
+The connector allows you to use Apache Kafka and the Confluent platform while taking advantage of ScyllaDB’s underlying shard-per-core, shared-nothing architecture.

The following documents will get you started with the Kafka Connector:

* :doc:`Kafka Sink Connector Quickstart <kafka-connector>`
* :doc:`Kafka Sink Connector Configuration <sink-config>`
-* `Introducing the Kafka Scylla Connector <https://www.scylladb.com/2020/02/18/introducing-the-kafka-scylla-connector/>`_ - Scylla Users blog
+* `Introducing the Kafka ScyllaDB Connector <https://www.scylladb.com/2020/02/18/introducing-the-kafka-scylla-connector/>`_ - ScyllaDB Users blog


diff --git a/docs/using-scylla/local-secondary-indexes.rst b/docs/using-scylla/local-secondary-indexes.rst
--- a/docs/using-scylla/local-secondary-indexes.rst
+++ b/docs/using-scylla/local-secondary-indexes.rst
@@ -3,10 +3,10 @@ Local Secondary Indexes
===============================

Local Secondary Indexes is an enhancement to :doc:`Global Secondary Indexes <secondary-indexes>`,
-which allows Scylla to optimize workloads where the partition key of the base table and the index are the same key.
+which allows ScyllaDB to optimize workloads where the partition key of the base table and the index are the same key.

.. note::
- As of Scylla Open Source 4.0, updates for local secondary indexes are performed **synchronously**. When updates are synchronous, the client acknowledges the write
+ As of ScyllaDB Open Source 4.0, updates for local secondary indexes are performed **synchronously**. When updates are synchronous, the client acknowledges the write
operation only **after both** the base table modification **and** the view update are written.
This is important to note because the process is no longer asynchronous and the modifications are immediately reflected in the index.
In addition, if the view update fails, the client receives a write error.
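
As a brief, hedged sketch (the table below is hypothetical), a local secondary index shares the base table's partition key:

.. code-block:: cql

   -- Hypothetical base table, partitioned by location.
   CREATE TABLE menus (
       location text,
       name text,
       price float,
       dish_type text,
       PRIMARY KEY (location, name)
   );

   -- Local index: the index's partition key is the same as the
   -- base table's partition key (location).
   CREATE INDEX ON menus ((location), dish_type);

Queries that restrict ``location`` and filter on ``dish_type`` can then be served by the same replicas that own the base partition.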
@@ -100,7 +100,7 @@ The coordinator processes the request for the index and base table internally an

.. note::

- When the same table has both LSI and GSI, Scylla will automatically use the right Index for each query.
+ When the same table has both LSI and GSI, ScyllaDB will automatically use the right Index for each query.

When should you use a Local Secondary Index
...........................................
@@ -113,7 +113,7 @@ More information
* :doc:`Global Secondary Indexes </using-scylla/secondary-indexes/>`
* :doc:`CQL Reference </cql/secondary-indexes/>` - CQL Reference for Secondary Indexes

-The following courses are available from Scylla University:
+The following courses are available from ScyllaDB University:

* `Materialized Views and Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/>`_
* `Local Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/topic/local-secondary-indexes-and-combining-both-types-of-indexes/>`_
diff --git a/docs/using-scylla/lwt.rst b/docs/using-scylla/lwt.rst
--- a/docs/using-scylla/lwt.rst
+++ b/docs/using-scylla/lwt.rst
@@ -91,7 +91,7 @@ considered present:
+-------------+-----+------+------+------+

It is OK to use a comparison with ``NULL`` in a condition.
-But since ``NULL`` value and missing value in Scylla are
+But since ``NULL`` value and missing value in ScyllaDB are
indistinguishable, conditions which compare with ``NULL``
will return the same result when applied to both
missing rows or existing rows with ``NULL`` cells:
@@ -118,8 +118,8 @@ evaluate the condition of the "missing" regular row:
| True | 2 |
+-------------+-----+
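
A hedged CQL sketch of such a condition (the table and column names here are hypothetical):

.. code-block:: cql

   -- The IF condition evaluates the same way whether the row is
   -- entirely missing or exists with a NULL balance cell.
   UPDATE accounts SET balance = 100
       WHERE id = 1
       IF balance = NULL;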

-Scylla Paxos
-============
+ScyllaDB Paxos
+==============

The statements with an ``IF`` clause use a different write path, employing the Paxos consensus algorithm (see `figure`_) to ensure linearizability of the execution history.

@@ -139,7 +139,7 @@ allows the coordinator to proceed with reading and updating a row
without interference. The state of the protocol is persisted in
system.paxos table, which is local to each replica.

-Unlike Cassandra, Scylla piggy-backs the old version of the row on
+Unlike Cassandra, ScyllaDB piggy-backs the old version of the row on
response to "Prepare" request, so reading a row doesn't require
a separate message exchange.

@@ -166,10 +166,10 @@ from system.paxos.
The size of the quorum impacts how many acknowledgements the
coordinator must get before proceeding to the next round or
responding to the client. For Prepare and Accept, it is configured
-with ``SERIAL CONSISTENCY`` setting. For Learn, Scylla's eventual
+with ``SERIAL CONSISTENCY`` setting. For Learn, ScyllaDB's eventual
``CONSISTENCY`` is used. Pruning is done in the background.
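
A hedged cqlsh sketch of choosing these quorums (the table in the conditional insert is hypothetical):

.. code-block:: cql

   -- SERIAL CONSISTENCY sets the quorum for the Prepare and Accept
   -- rounds; CONSISTENCY sets the quorum for the Learn round.
   CONSISTENCY QUORUM;
   SERIAL CONSISTENCY LOCAL_SERIAL;

   INSERT INTO users (id, name) VALUES (1, 'alice') IF NOT EXISTS;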

-Key differences between Scylla and Cassandra Paxos implementations
+Key differences between ScyllaDB and Cassandra Paxos implementations
are in collapsing prepare and read actions into a single round, and
also introducing an extra asynchronous "prune" round, which keeps
system.paxos table small and thus reduces write amplification
@@ -186,7 +186,7 @@ Batch statements
The entire conditional batch has an isolated view of the database and is executed on an all-or-nothing principle. In many ways, conditional batches are similar to ACID transactions in relational databases, with the exception that a batch is executed only if **all conditions** in **all statements** are **true**; otherwise, it does nothing.
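
A hedged sketch of a conditional batch (the table is hypothetical; all statements in a conditional batch must target a single partition):

.. code-block:: cql

   -- Every statement touches partition id = 1; the whole batch applies
   -- only if every IF condition is true, otherwise nothing is written.
   BEGIN BATCH
       UPDATE orders SET qty = 2 WHERE id = 1 AND item = 'a' IF qty = 1;
       UPDATE orders SET qty = 5 WHERE id = 1 AND item = 'b' IF qty = 4;
   APPLY BATCH;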

.. A number of new database usage patterns emerge when lightweight transactions are part of the database features portfolio.
-.. Scylla now can not only handle large volumes of data for analytics, event history and such, but serve as a reliable and efficient back-end for web, mobile, IIoT and cybersecurity applications.
+.. ScyllaDB now can not only handle large volumes of data for analytics, event history and such, but serve as a reliable and efficient back-end for web, mobile, IIoT and cybersecurity applications.

Reading with Paxos
==================
@@ -539,5 +539,5 @@ Other limitations are more minor:
Additional Information
======================

-* :doc:`How does Scylla LWT Differ from Apache Cassandra ? </kb/lwt-differences>` - How does Scylla's implementation of lightweight transactions differ from Apache Cassandra?
+* :doc:`How does ScyllaDB LWT Differ from Apache Cassandra ? </kb/lwt-differences>` - How does ScyllaDB's implementation of lightweight transactions differ from Apache Cassandra?
* :doc:`How to Change gc_grace_seconds for a Table </kb/gc-grace-seconds/>` - How to change the ``gc_grace_seconds`` parameter for a table
diff --git a/docs/using-scylla/mig-tool-review.rst b/docs/using-scylla/mig-tool-review.rst
--- a/docs/using-scylla/mig-tool-review.rst
+++ b/docs/using-scylla/mig-tool-review.rst
@@ -3,7 +3,7 @@ ScyllaDB Migration Tools: An Overview
=======================================

The following migration tools are available for migrating to ScyllaDB from compatible databases,
-such as Apache Cassandra, or from other Scylla clusters (ScyllaDB Open Source or Enterprise):
+such as Apache Cassandra, or from other ScyllaDB clusters (ScyllaDB Open Source or Enterprise):

* From SSTable to SSTable
- Using nodetool refresh, :ref:`Load and Stream <nodetool-refresh-load-and-stream>` option.
@@ -13,5 +13,5 @@ such as Apache Cassandra, or from other Scylla clusters (ScyllaDB Open Source or
* From CQL to CQL
- `Spark Migrator <https://github.com/scylladb/scylla-migrator>`_. The Spark migrator allows you to easily transform the data before pushing it to the destination DB.

-* From DynamoDB to Scylla Alternator
+* From DynamoDB to ScyllaDB Alternator
- `Spark Migrator <https://github.com/scylladb/scylla-migrator>`_. The Spark migrator allows you to easily transform the data before pushing it to the destination DB.
diff --git a/docs/using-scylla/migrate-scylla.rst b/docs/using-scylla/migrate-scylla.rst
--- a/docs/using-scylla/migrate-scylla.rst
+++ b/docs/using-scylla/migrate-scylla.rst
@@ -5,18 +5,18 @@ Migrate to ScyllaDB
:maxdepth: 2
:hidden:

- Migration Process from Cassandra to Scylla </operating-scylla/procedures/cassandra-to-scylla-migration-process/>
- Scylla and Apache Cassandra Compatibility</using-scylla/cassandra-compatibility/>
+ Migration Process from Cassandra to ScyllaDB </operating-scylla/procedures/cassandra-to-scylla-migration-process/>
+ ScyllaDB and Apache Cassandra Compatibility</using-scylla/cassandra-compatibility/>
Migration Tools Overview <mig-tool-review>

.. panel-box::
- :title: Migrate to Scylla
+ :title: Migrate to ScyllaDB
:id: "getting-started"
:class: my-panel

- * :doc:`Migration Process from Cassandra to Scylla </operating-scylla/procedures/cassandra-to-scylla-migration-process/>`
- * :doc:`Scylla and Apache Cassandra Compatibility</using-scylla/cassandra-compatibility/>`
- * Migrating to Scylla `lesson <https://university.scylladb.com/courses/scylla-operations/lessons/migrating-to-scylla/>`_ on Scylla University
+ * :doc:`Migration Process from Cassandra to ScyllaDB </operating-scylla/procedures/cassandra-to-scylla-migration-process/>`
+ * :doc:`ScyllaDB and Apache Cassandra Compatibility</using-scylla/cassandra-compatibility/>`
+ * Migrating to ScyllaDB `lesson <https://university.scylladb.com/courses/scylla-operations/lessons/migrating-to-scylla/>`_ on ScyllaDB University

.. panel-box::
:title: Migration Tools
diff --git a/docs/using-scylla/secondary-indexes.rst b/docs/using-scylla/secondary-indexes.rst
--- a/docs/using-scylla/secondary-indexes.rst
+++ b/docs/using-scylla/secondary-indexes.rst
@@ -2,11 +2,11 @@
Global Secondary Indexes
===============================

-The data model in Scylla partitions data between cluster nodes using a partition key, which is defined in the database schema. This is an efficient way to look up rows because you can find the node hosting the row by hashing the partition key.
+The data model in ScyllaDB partitions data between cluster nodes using a partition key, which is defined in the database schema. This is an efficient way to look up rows because you can find the node hosting the row by hashing the partition key.

However, this also means that finding a row using a non-partition key requires a full table scan which is inefficient.

-**Global Secondary indexes** (named "Secondary indexes" for the rest of this doc) are a mechanism in Scylla which allows efficient searches on non-partition keys by creating an index. They are indexes created on columns other than the entire partition key, where each secondary index indexes *one* specific column. A secondary index can index a column used in the partition key in the case of a composite partition key.
+**Global Secondary indexes** (named "Secondary indexes" for the rest of this doc) are a mechanism in ScyllaDB which allows efficient searches on non-partition keys by creating an index. They are indexes created on columns other than the entire partition key, where each secondary index indexes *one* specific column. A secondary index can index a column used in the partition key in the case of a composite partition key.
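
For example (a hedged sketch with a hypothetical table), a single component of a composite partition key can be indexed:

.. code-block:: cql

   -- Composite partition key (vendor, sku).
   CREATE TABLE products (
       vendor text,
       sku text,
       price decimal,
       PRIMARY KEY ((vendor, sku))
   );

   -- Global secondary index on one component of the partition key.
   CREATE INDEX ON products (vendor);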

Secondary indexes provide the following advantages:

@@ -16,16 +16,16 @@ Secondary indexes provide the following advantages:

3. Updates can be more efficient with secondary indexes than materialized views because only changes to the primary key and indexed column cause an update in the index view.

-What’s more, the size of an index is proportional to the size of the indexed data. As data in Scylla is distributed to multiple nodes, it’s impractical to store the whole index on a single node, as it limits the size of the index to the capacity of a single node, not the capacity of the whole cluster.
+What’s more, the size of an index is proportional to the size of the indexed data. As data in ScyllaDB is distributed to multiple nodes, it’s impractical to store the whole index on a single node, as it limits the size of the index to the capacity of a single node, not the capacity of the whole cluster.

-For this reason, secondary indexes in Scylla are **global** rather than local. With global indexing, a materialized view is created for each index. This :doc:`materialized view </using-scylla/materialized-views/>` has the indexed column as a partition key and primary key (partition key and clustering keys) of the indexed row as clustering keys.
+For this reason, secondary indexes in ScyllaDB are **global** rather than local. With global indexing, a materialized view is created for each index. This :doc:`materialized view </using-scylla/materialized-views/>` has the indexed column as a partition key and primary key (partition key and clustering keys) of the indexed row as clustering keys.

Secondary indexes created globally provide a further advantage: you can use the value of the indexed column to find the corresponding index table row in the cluster so reads are scalable. Note however, that with this approach, writes are slower than with local indexing because of the overhead required to keep the indexed view up to date.

How Secondary Index Queries Work
................................

-Scylla breaks indexed queries into two parts:
+ScyllaDB breaks indexed queries into two parts:

1. a query on the index table to retrieve partition keys for the indexed table, and
2. a query to the indexed table using the retrieved partition keys.
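
A hedged sketch of these two steps, using the ``buildings`` example table that appears later on this page:

.. code-block:: cql

   CREATE INDEX ON buildings (city);

   -- Step 1 runs internally against the index's materialized view to
   -- collect matching partition keys; step 2 fetches those rows from
   -- the base table.
   SELECT * FROM buildings WHERE city = 'Taipei';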
@@ -66,7 +66,7 @@ Let’s populate it with some test data:
INSERT INTO buildings(name,city,height) VALUES ('China Zun','Beijing',528);
INSERT INTO buildings(name,city,height) VALUES ('Taipei 101','Taipei',508);

-Note that if we try to query on a column (the part after the ``WHERE`` clause) in a Scylla table that isn’t part of the primary key, we’ll see that this is not permitted. For example:
+Note that if we try to query on a column (the part after the ``WHERE`` clause) in a ScyllaDB table that isn’t part of the primary key, we’ll see that this is not permitted. For example:

.. code-block:: cql

@@ -134,7 +134,7 @@ More information
* :doc:`Local Secondary Indexes </using-scylla/local-secondary-indexes/>`
* :doc:`CQL Reference </cql/secondary-indexes/>` - CQL Reference for Secondary Indexes

-The following courses are available from Scylla University:
+The following courses are available from ScyllaDB University:

* `Materialized Views and Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/>`_
* `Global Secondary Indexes <https://university.scylladb.com/courses/data-modeling/lessons/materialized-views-secondary-indexes-and-filtering/topic/global-secondary-indexes/>`_
diff --git a/docs/using-scylla/tracing.rst b/docs/using-scylla/tracing.rst
--- a/docs/using-scylla/tracing.rst
+++ b/docs/using-scylla/tracing.rst
@@ -3,7 +3,7 @@ Tracing



-Tracing is a ScyllaDB tool meant to help debugging and analyzing internal flows in the server. There are three types of tracing you can use with Scylla:
+Tracing is a ScyllaDB tool meant to help debug and analyze internal flows in the server. There are three types of tracing you can use with ScyllaDB:

* **User Defined CQL query** - One example of such a flow is CQL request processing. By placing a flag inside a CQL query, you can start tracing.
* **Probabilistic Tracing** randomly chooses a request to be traced with some defined probability.
@@ -149,8 +149,8 @@ Traces are created in the context of a **tracing session**. For instance, if we
* ``duration``: the total duration of this tracing session in microseconds
* ``parameters``: this map contains string pairs that describe the query. This may include *query string* or *consistency level*.
* ``request``: a short string describing the current query, like "Execute CQL3 query".
-* ``request_size``: size of the request (available from Scylla 3.0).
-* ``response_size``: size of the response (available from Scylla 3.0).
+* ``request_size``: size of the request (available from ScyllaDB 3.0).
+* ``response_size``: size of the response (available from ScyllaDB 3.0).
* ``started_at``: a timestamp taken when the tracing session has begun.
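
A hedged cqlsh sketch of recording a trace and then inspecting some of these columns (the traced query itself is only illustrative):

.. code-block:: cql

   TRACING ON;
   SELECT * FROM system.local;

   SELECT session_id, request, duration, started_at
       FROM system_traces.sessions
       LIMIT 10;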

``events`` table column descriptions
@@ -323,7 +323,7 @@ Therefore all of them are likely going to hit the Slow Query threshold and get l
If queueing is caused by some particularly heavy request, we would like to be able to filter this request from those that got logged due to a long queueing.
We have recently added tools that would help us do that:

-New columns were added to `system_traces.sessions`_ (available from Scylla 3.0)
+New columns were added to `system_traces.sessions`_ (available from ScyllaDB 3.0)

* ``request_size``
* ``response_size``
@@ -401,4 +401,4 @@ This procedure can also be used to collect tracing data in order to view which q
COPY system_traces.events TO '/tmp/tracing/events.out' WITH HEADER = TRUE;


-If you are sending this data to Scylla for help, follow the directions in :ref:`How to Report a Scylla Problem <report-performance-problem>`.
+If you are sending this data to ScyllaDB for help, follow the directions in :ref:`How to Report a ScyllaDB Problem <report-performance-problem>`.

Commit Bot

<bot@cloudius-systems.com>
Jul 1, 2024, 3:58:43 PM
to scylladb-dev@googlegroups.com, Tzach Livyatan
From: Tzach Livyatan <tz...@scylladb.com>
Committer: Tomasz Grabiec <tgra...@scylladb.com>
Branch: master