trouble with queries in cqlsh


pedro.teixeira@movvo.com

<pedro.teixeira@movvo.com>
Mar 23, 2016, 8:25:40 AM
to ScyllaDB users

I have one node up and running for some tests.
12 HT cores, 64 GB RAM, 5 TB of SSDs in RAID 0 with XFS, kernel 4.5, Ubuntu 14.04.

ScyllaDB nightly.


We ran some simple stress tests with cassandra-stress. We can scale read IOPS by increasing the number of threads, but there seems to be some kind of limit on performance when running fewer threads. I will cover this in another post.

When making a query, Scylla times out:

[cqlsh 5.0.1 | Cassandra 2.1.8 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF2 limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from keyspace1.standard1  limit 10;

 count
 -------
    10

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100;

 count
 -------
   100

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 1000;

 count
 -------
  1000

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 10000;

 count
 -------
 10000

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh>

Avi Kivity

<avi@scylladb.com>
Mar 23, 2016, 8:34:23 AM
to scylladb-users@googlegroups.com


On 03/23/2016 02:25 PM, pedro.t...@movvo.com wrote:

I have one node up and running for some tests.
12 HT cores, 64 GB RAM, 5 TB of SSDs in RAID 0 with XFS, kernel 4.5, Ubuntu 14.04.

ScyllaDB nightly.


We ran some simple stress tests with cassandra-stress. We can scale read IOPS by increasing the number of threads, but there seems to be some kind of limit on performance when running fewer threads. I will cover this in another post.


That's expected; with low concurrency it is hard to keep many cores busy.
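To see why, here is a back-of-the-envelope check with Little's Law (in-flight requests ≈ throughput × latency); the numbers below are purely illustrative:

# Rough estimate of how many requests must be in flight to reach a target rate.
target_iops = 200_000      # reads/sec we would like the node to serve (illustrative)
mean_latency_s = 0.002     # ~2 ms mean read latency (illustrative)

needed_concurrency = target_iops * mean_latency_s
print(f"concurrent requests needed: {needed_concurrency:.0f}")   # -> 400

# A stress run with only a handful of client threads cannot keep this many
# requests outstanding, so most of the 12 cores sit idle.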



When making a query, Scylla times out:

[cqlsh 5.0.1 | Cassandra 2.1.8 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF2 limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP

This is also expected; reading 100M rows cannot complete within the default timeout.
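For a long scan like this, the client-side timeout can be raised. A minimal sketch with the Python driver (cassandra-driver), assuming the keyspace1.standard1 table from cassandra-stress and a placeholder contact point; note that the server-side read_request_timeout_in_ms may also need to be increased:

from cassandra.cluster import Cluster

cluster = Cluster(['MY_IP'])          # placeholder contact point
session = cluster.connect('keyspace1')

# The driver's default per-request timeout is ~10 s; a full-table count
# needs far more.
session.default_timeout = 600         # seconds

row = session.execute('SELECT count(*) FROM standard1 LIMIT 100000000').one()
print(row.count)

cluster.shutdown()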


cqlsh> SELECT count(*) from keyspace1.standard1  limit 10;

 count
 -------
    10

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100;

 count
 -------
   100

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 1000;

 count
 -------
  1000

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 10000;

 count
 -------
 10000

(1 rows)
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100000;
OperationTimedOut: errors={}, last_host=MY_IP


You can observe the I/O load with vmstat or iostat (or with scylla-monitoring). Such queries are not handled well by the Scylla data model (the same applies to Cassandra); the data model is geared towards accessing a single primary key per query.

cqlsh>
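To illustrate the access pattern the data model is built for, here is a sketch with the Python driver contrasting a single-partition lookup with the full scans above (table and key column follow the default cassandra-stress schema; the host and key value are placeholders):

from cassandra.cluster import Cluster

cluster = Cluster(['MY_IP'])          # placeholder contact point
session = cluster.connect('keyspace1')

# Fast path: one partition, fetched by its primary key and routed straight
# to the replica (and shard) that owns it.
lookup = session.prepare('SELECT * FROM standard1 WHERE key = ?')
row = session.execute(lookup, [b'some-key']).one()

# Slow path: an unbounded scan has to touch every partition, which is the
# shape of query that times out above.
# session.execute('SELECT count(*) FROM standard1')

cluster.shutdown()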


Tzach Livyatan

<tzach@scylladb.com>
Mar 23, 2016, 8:41:01 AM
to ScyllaDB users
On Wed, Mar 23, 2016 at 2:34 PM, Avi Kivity <a...@scylladb.com> wrote:


On 03/23/2016 02:25 PM, pedro.t...@movvo.com wrote:

I have one node up and running for some tests.
12 HT cores, 64 GB RAM, 5 TB of SSDs in RAID 0 with XFS, kernel 4.5, Ubuntu 14.04.

ScyllaDB nightly.


We ran some simple stress tests with cassandra-stress. We can scale read IOPS by increasing the number of threads, but there seems to be some kind of limit on performance when running fewer threads. I will cover this in another post.


That's expected; with low concurrency it is hard to keep many cores busy.


When making a query, Scylla times out:

[cqlsh 5.0.1 | Cassandra 2.1.8 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from MY_KEYSPACE.MY_CF2 limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP
cqlsh> SELECT count(*) from keyspace1.standard1  limit 100000000;
OperationTimedOut: errors={}, last_host=MY_IP

This is also expected; reading 100M rows cannot complete within the default timeout.

Adding to Avi's comment: there is no straightforward way to count all rows in Scylla (or Cassandra).
An in-depth explanation is available in the following link; most of it applies to both Cassandra and Scylla.
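One common workaround, sketched here with the Python driver (host and table names are placeholders), is to page through the table and count rows client-side rather than asking the server for a single huge count; for very large tables the scan can also be split by token ranges and run in parallel:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['MY_IP'])          # placeholder contact point
session = cluster.connect('keyspace1')

# Stream the table back in pages and count client-side; the driver fetches
# the next page transparently as the iterator advances.
stmt = SimpleStatement('SELECT key FROM standard1', fetch_size=5000)
total = sum(1 for _ in session.execute(stmt))
print('rows:', total)

cluster.shutdown()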

 