Hi,

We are trying to set up Scylla with a limited cpuset in our 6-node configuration. Here is what the default cpuset looks like with all 48 CPUs in the node:
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47

I want to reduce the cpuset to half by picking one hyperthread from each core. These are the steps I am following:
1. Stop scylla server
2. Update cpuset.conf with the reduced set:
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23
3. Run iotune with the new cpuset to update io.conf:
iotune --evaluation-directory /root/data_io --format envfile --options-file /etc/scylla.d/io.conf --cpuset "0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23"
4. Restart scylla-server.
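Put together, the steps above can be sketched as the following shell sequence. The systemd unit name scylla-server and the use of sudo are assumptions about a standard package install and may differ on your system; the iotune invocation is the one from step 3.

```shell
# Sketch of the four steps above; unit and file names assume a standard
# ScyllaDB package install.
sudo systemctl stop scylla-server

# Step 2: edit /etc/scylla.d/cpuset.conf to list one hyperthread per core.

# Step 3: regenerate io.conf for the reduced cpuset.
sudo iotune --evaluation-directory /root/data_io --format envfile \
    --options-file /etc/scylla.d/io.conf \
    --cpuset "0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23"

sudo systemctl start scylla-server
```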
Are these steps enough?
Thanks, Sid
You received this message because you are subscribed to the Google Groups "ScyllaDB users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to scylladb-users+unsubscribe@googlegroups.com.
To post to this group, send email to scylladb-users@googlegroups.com.
Visit this group at https://groups.google.com/group/scylladb-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/scylladb-users/2fc8624d-351a-46b7-962e-6417ee5b198a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
On Fri, Jan 6, 2017 at 10:31 AM, sid via ScyllaDB users <scyllad...@googlegroups.com> wrote:
> Are these steps enough?
In general, yes. The only thing I'm not sure about is whether the CPUs you manually selected are aligned with one hyperthread per core; it could be that you just reduced the core count. You'll need to check the hardware topology of your machine. I don't know offhand how to do it; a simple Google search should do, or someone with the knowledge will jump on the thread soon.
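One way to check this on Linux (my suggestion, not from the thread) is via the sysfs CPU topology files: logical CPUs on the same physical core share a thread_siblings_list, so picking one CPU from each distinct siblings list gives you one hyperthread per core.

```shell
# Each line maps a logical CPU to the hyperthread siblings on its core;
# choosing one CPU from each distinct siblings list selects one HT per core.
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
# lscpu --extended=CPU,CORE,SOCKET shows the same CPU-to-core mapping.
```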
Inline.
On Friday, January 6, 2017 at 10:45:00 AM UTC-8, Dor Laor wrote:
> You'll need to check the hardware topology in your machine.

I did the check :). The CPUs are selected in such a way that only one HT per core is used.
A follow-up question: do I have to build a new schema, or can I just restart Scylla and carry on with whatever schema is in there?
--
Thanks, Sid
As part of scylla-server startup, the scylla-prepare script also sets up the NIC (e.g. IRQ affinity, setup_xps, etc.). By default these routines ignore the cpuset of the Scylla server and just use all the visible CPUs (e.g. hwloc-distrib <nic irq count | xps count>, etc.). I'm guessing that this may not be ideal, but it _may_ not be _severely_ detrimental to performance either. We may have to run some experiments to understand whether this is really a concern in our setup.
On Mon, Jan 9, 2017 at 4:27 PM, Krishnanand Thommandra <kthom...@arista.com> wrote:
> These routines by default ignore the cpuset of the scylla server and just use all the visible cpus.

Good point. Vlad, can we easily improve the MQ path in the script to utilize certain cores? It's just a bitwise operation.
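The bitwise operation in question amounts to turning a cpuset into the hex affinity mask format used by /proc/irq/<n>/smp_affinity. A minimal sketch, using the 0-23 cpuset from earlier in the thread:

```shell
# Build a hex bitmask with one bit set per CPU in the cpuset, as written to
# /proc/irq/<n>/smp_affinity. CPUs 0-23 yield the 24-bit mask ffffff.
mask=0
for cpu in $(seq 0 23); do
  mask=$(( mask | (1 << cpu) ))
done
printf '%x\n' "$mask"   # -> ffffff
```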
Thanks Vlad.
In addition to this change, I think for our Intel NIC we'll need to set up the RSS table appropriately so that the proper Rx queues are used. The Tx queues will be limited and set up appropriately by the script.
A couple more questions on the issue of cpuset:
1. Running iotune takes some time (2-4 minutes). To save this time while experimenting with different cpusets, once I know what the parameters in io.conf look like for a particular cpuset, is it okay to just update io.conf and cpuset.conf directly and restart scylla-server, without running iotune over and over?
2. When I run iotune for 6 CPUs:
iotune --evaluation-directory /root/data_io --format envfile --options-file /etc/scylla.d/io.conf --cpuset "0,1,2,12,13,14"
io.conf only has:
SEASTAR_IO="--max-io-requests=51"
i.e. the num-io-queues setting is missing. Is this expected?
On 01/10/2017 01:57 PM, Krishnanand Thommandra wrote:
> In addition to this change, I think for our Intel NIC, we'll need to setup the RSS table appropriately so that proper rx queues are used.
If you are using Intel's 10G NIC managed by the ixgbe driver, you need to know that its RSS is limited to 16 queues, which means that higher queues will not get RSS-filtered traffic.
The script attached to my previous email will spread all IRQs between the cores you provide. So there is no need to tweak the RSS table, only the number of Rx queues: limit them to 16 and all of them will be RSS queues that are properly configured by default.
Our script configures XPS on all present queues for egress so you are going to be covered in this aspect.
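Limiting the queue count as described can be done with ethtool. A sketch, assuming a multi-queue driver that supports channel configuration; the interface name eth0 is a placeholder:

```shell
# Cap the NIC at 16 combined queues so every queue receives RSS-hashed
# traffic (ixgbe spreads RSS over at most 16 queues). eth0 is a placeholder.
ethtool -L eth0 combined 16

# Inspect the resulting RSS indirection table and hash key.
ethtool -x eth0
```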