On Feb 19, 2016 21:24, "Dor Laor" <d...@scylladb.com> wrote:
>
> On Fri, Feb 19, 2016 at 5:31 PM, Avi Kivity <a...@scylladb.com> wrote:
>>
>> On 02/19/2016 04:45 PM, w.fakh...@gmail.com wrote:
>>>
>>> Hi,
>>>
>>> I recently ran some benchmarks on Scylla to assess the performance gains from using DPDK. Counterintuitively, with DPDK enabled my results showed overall degraded performance: the reported average latencies are almost 35% higher than with posix networking. I was expecting latency improvements along the lines of the 1st graph in these results. I'd be interested to hear if anyone has insights into what might have caused these seemingly unexpected results (or an explanation of why they might actually make sense).
>>>
>>> For reference, the setup for my experiments was as follows:
>>> One client node and one server node, each with an Intel 82599ES 10-Gigabit NIC. Both the client and server nodes are Dell PowerEdge R630 servers with 16 physical cores (32 with hyperthreading). I used cassandra-stress for the benchmarks, configured with 500 generator threads. The workload consisted of single-key reads against a column family with 1 million rows.
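>>>
>>> Concretely, an invocation with those settings looks roughly like this (the node address is a placeholder and the exact options are approximate):
>>>
>>>     cassandra-stress write n=1000000 -node 10.0.0.2
>>>     cassandra-stress read n=1000000 -rate threads=500 -node 10.0.0.2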
>>>
>>
>> Hi,
>>
>> Benchmarking with one client is problematic, because the Java driver uses a small number of connections. This creates an imbalance across the shards, which reduces performance. Even with a larger number of connections, the hashing policy we use to direct connections to shards is not perfect, so some shards still end up overloaded.
>>
>> We plan to augment the current hash-based policy with Flow Director. This feature lets us direct individual connections to individual cores and thus achieve fine-grained load balancing.
>>
>> You can check whether this is the problem by looking at the transport.connections.current counter on each shard. You can try adding more loaders (and running multiple cassandra-stress instances on each loader).
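>>
>> For example, to get more client connections you can run several stress instances per loader, along these lines (the address is a placeholder):
>>
>>     cassandra-stress read n=1000000 -rate threads=250 -node 10.0.0.2 &
>>     cassandra-stress read n=1000000 -rate threads=250 -node 10.0.0.2 &
>>     wait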
>>
>> Or it might be something else. Is Scylla loaded? Look at the reactor.*.load gauge on each shard.
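>>
>> One way to watch both counters on the server is scyllatop; the metric name patterns below are from memory and may need adjusting to match your metric names:
>>
>>     scyllatop '*transport*connections*'
>>     scyllatop '*reactor*load*'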
>
>
> On top of all that (which, with a single client, I'm 95% confident is the reason), we
> added a power-save mode that takes the server out of poll mode. We noticed it hurts
> Scylla when the system isn't loaded (your scenario: a single client can't load the system).
> It may have a larger negative effect with DPDK.
> You can try adding --poll-mode to the command line to get around it.
>
Actually, sleep mode is automatically disabled with the native stack.
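It could still be worth trying for the posix runs, though. As a sketch, assuming the packaged sysconfig layout (the file name and existing arguments vary by distro and version), you would append the flag to the server's arguments:

    # /etc/sysconfig/scylla-server (or /etc/default/scylla-server)
    SCYLLA_ARGS="<existing args> --poll-mode"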