--
You received this message because you are subscribed to the Google Groups "ScyllaDB users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scylladb-user...@googlegroups.com.
To post to this group, send email to scyllad...@googlegroups.com.
Visit this group at https://groups.google.com/group/scylladb-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/scylladb-users/e4dfae1b-8c15-44ff-a7cd-6720b0f51917%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
(Sachin is not able to post for some reason. Posting on his behalf.)
We have already gone through these articles. Is 185K rows per second a decent speed on such a setup? Do you have any benchmark numbers for scans that we can refer to, especially against Cassandra?
On Friday, March 23, 2018 at 9:51:58 AM UTC+5:30, Sachin Janani wrote:
We are running some benchmarks on ScyllaDB by executing some queries, and we found that select count(*) queries take a very long time to complete.
Following are the details of the Scylla cluster:
Number of nodes: 3
RAM: 64 GB on each node
Number of CPU cores on each node: 8
Number of rows in table: 311 million
Number of columns: 23
Size of table as shown by nodetool stats: approx. 300 GB across 3 nodes
Time taken to execute select count(*) from CQLSH: 1.1 hours
Time taken to execute select count(*) with Apache Spark using the spark-cassandra-connector: 28 mins (i.e., around 185K rows per second)
CPU consumption on the Scylla nodes was almost 100%.
Scylla consumed all available memory while ingesting rows.
Note: We set up the XFS partition for Scylla manually, i.e., not using the Scylla setup scripts.
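The reported Spark throughput is consistent with the row count and scan time quoted above; a quick check of the arithmetic:

```python
# Sanity-check the reported Spark scan throughput (numbers from the post above).
rows = 311_000_000        # rows in the table
scan_seconds = 28 * 60    # 28-minute Spark count(*)

throughput = rows / scan_seconds
print(f"{throughput:,.0f} rows/second")  # prints 185,119 rows/second
```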
Is there any performance tuning that we are missing?
Compared to Cassandra, what performance difference should we expect for table scans and point queries? Also, can anyone point me to **READ** benchmarks for large Scylla tables?
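For context on the gap between the two timings above: a plain count(*) from CQLSH runs as one sequential full scan, whereas scan-oriented clients like the spark-cassandra-connector split the Murmur3 token ring into sub-ranges and count them in parallel. A minimal sketch of that range-splitting idea follows; the table and partition-key names ("mytable", "pk") and the helper functions are illustrative placeholders, not part of any driver API:

```python
# Illustrative sketch: split the full Murmur3 token ring into contiguous
# sub-ranges so per-range count(*) queries can run concurrently, which is
# the approach parallel scan clients take for large tables.

MIN_TOKEN = -2**63      # Murmur3Partitioner tokens span [-2^63, 2^63 - 1]
MAX_TOKEN = 2**63 - 1

def token_ranges(n):
    """Yield n contiguous (start, end] token sub-ranges covering the ring."""
    span = (MAX_TOKEN - MIN_TOKEN) // n
    start = MIN_TOKEN
    for i in range(n):
        end = MAX_TOKEN if i == n - 1 else start + span
        yield (start, end)
        start = end

def range_queries(table, pk, n):
    """Build one count(*) CQL statement per token sub-range."""
    return [
        f"SELECT count(*) FROM {table} "
        f"WHERE token({pk}) > {lo} AND token({pk}) <= {hi}"
        for lo, hi in token_ranges(n)
    ]

# Each of these can be executed on a separate connection/executor and the
# partial counts summed to get the total row count.
queries = range_queries("mytable", "pk", 8)
```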