sysbench against the cluster shows better performance with one node than with three


Ruben O

Mar 18, 2015, 11:51:03 AM3/18/15
to percona-d...@googlegroups.com
Hi

I posted a few days ago about an issue with deploying XtraDB Cluster. I was finally able to deploy it with Vagrant + VirtualBox + Ansible and run the benchmark. To run the benchmark, I installed the sysbench suite on another host.

I should mention that, to balance the queries, I had previously set up another Ubuntu host running HAProxy.
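For context, a minimal HAProxy backend for this kind of setup might look like the sketch below. The node addresses are assumptions (the post only mentions 192.168.33.100 as the front end), so adjust them to your Vagrant network:

```
# Hypothetical haproxy.cfg fragment -- node IPs are assumptions, not from the post.
listen pxc-front
    bind 192.168.33.100:3306
    mode tcp
    balance roundrobin
    option tcpka
    server node1 192.168.33.101:3306 check
    server node2 192.168.33.102:3306 check
    server node3 192.168.33.103:3306 check
```

With `mode tcp` and `balance roundrobin`, each new MySQL connection is sent to the next node in turn, which is why HAProxy marks stopped nodes red once their health checks fail.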

I then ran two tests. The first was against all 3 nodes of the XtraDB Cluster, and the test data was prepared with these parameters:

sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=2000000 \
--mysql-host=192.168.33.100 \
--mysql-port=3306 \
--mysql-user=root \
--mysql-password=xxxxx \
--test=/usr/share/sysbench/tests/db/oltp.lua \
prepare

Then I ran the script with "run":

sysbench \
--db-driver=mysql \
--num-threads=8 \
--max-requests=50000 \
--oltp-table-size=2000000 \
--oltp-test-mode=complex \
--test=/usr/share/sysbench/tests/db/oltp.lua \
--report-interval=1 \
--mysql-table-engine=innodb \
--mysql-host=192.168.33.100 \
--mysql-port=3306 \
--mysql-user=root \
--mysql-password=\
run

These were the first results I got:

    queries performed:
        read:                            700154
        write:                           200035
        other:                           100011
        total:                           1000200
    transactions:                        50000  (42.51 per sec.)
    read/write requests:                 900189 (765.32 per sec.)
    other operations:                    100011 (85.03 per sec.)
    ignored errors:                      11     (0.01 per sec.)
    reconnects:                          0      (0.00 per sec.)


Then I thought, "What if I shut down 2 nodes and ran the test against just one MySQL instance?" So I stopped the mysql service on node 2 and node 3 (HAProxy noticed this, of course, and marked the two nodes red). Then I ran the test again, and the results were:

OLTP test statistics:
    queries performed:
        read:                            700000
        write:                           200000
        other:                           100000
        total:                           1000000
    transactions:                        50000  (111.94 per sec.)
    read/write requests:                 900000 (2014.88 per sec.)
    other operations:                    100000 (223.88 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

Why do all the metrics show better performance in the second test? Is it because, with two nodes down, we avoid the TCP latency? I really do not understand.

Is it that XtraDB Cluster is only about availability, not scalability?

Please share your comments; they are very important to me!

Have a nice day,

R

Wagner Bianchi

Mar 18, 2015, 3:43:12 PM3/18/15
to percona-d...@googlegroups.com
So, basically, when you shut down nodes 2 and 3, you left the cluster to certify and apply transactions on just one node, and that one node is now the origin of all transactions. As far as I understand, the latency or overhead Galera Cluster presents is connected to having many nodes certifying and applying transactions. When you set up a cluster of just one node, you reduce that overhead to the minimum.
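One way to see how much replication pressure the cluster is under is to watch the standard Galera/wsrep status counters on a node while the benchmark runs, for example:

```sql
-- Run on any cluster node during the benchmark.
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';  -- fraction of time replication was paused by flow control
SHOW GLOBAL STATUS LIKE 'wsrep_cert_deps_distance';   -- how parallelizable the replicated workload is
SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue_avg'; -- average applier backlog on this node
```

If `wsrep_flow_control_paused` is well above zero, the slower nodes are throttling the whole cluster, which would explain much of the gap between the one-node and three-node runs.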



--
Wagner Bianchi, +55.31.8654.9510
Oracle ACE Director, MySQL Certified Professional
Skype: wbianchijr


alexey.y...@galeracluster.com

Mar 19, 2015, 9:36:57 AM3/19/15
to percona-d...@googlegroups.com, Ruben O
Firstly, can you confirm that each node VM was running on a dedicated HW
host? Or were they sharing CPU and IO of a single machine?

Secondly, replication in PXC is synchronous, which adds some overhead.
Running nodes inside VMs seriously increases that overhead. If your
workload is relatively lightweight (and 2M rows is relatively
lightweight by today's standards), the resulting replication
overhead can seriously degrade performance (but hardly that much).

Thirdly, correct cluster configuration can make it or break it: the
number of slave threads, flow control limits, etc.
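As a rough illustration of the knobs mentioned above, a my.cnf fragment could look like this. The values are starting points, not recommendations; tune them against your own workload:

```
# Sketch only -- values are assumptions for illustration.
[mysqld]
wsrep_slave_threads = 8                 # parallel applier threads; often sized relative to CPU cores
wsrep_provider_options = "gcs.fc_limit=256; gcs.fc_factor=0.99"
                                        # gcs.fc_limit: recv-queue length before flow control kicks in
                                        # gcs.fc_factor: queue fraction at which flow control is released
```

Raising `gcs.fc_limit` lets a node fall further behind before it throttles the cluster, trading replication lag for throughput.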

Finally, correct benchmark configuration is equally important. If the
load does not max out a single node, you can't expect any scalability
with 3 nodes, only degradation due to the additional overhead. For
example, you may want to try increasing the number of concurrent clients
to 24; you will likely see different results (although this alone is
unlikely to show any scalability).
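Concretely, that suggestion amounts to rerunning your own command with a higher thread count, e.g. (password placeholder kept as in your prepare step):

```shell
sysbench \
--db-driver=mysql \
--num-threads=24 \
--max-requests=50000 \
--oltp-table-size=2000000 \
--oltp-test-mode=complex \
--test=/usr/share/sysbench/tests/db/oltp.lua \
--report-interval=1 \
--mysql-table-engine=innodb \
--mysql-host=192.168.33.100 \
--mysql-port=3306 \
--mysql-user=root \
--mysql-password=xxxxx \
run
```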


Ruben O

Mar 19, 2015, 9:53:34 AM3/19/15
to percona-d...@googlegroups.com, rortiz...@gmail.com
Hi Alexey

Yes, if you read my first post you'll see that I ran my tests in a virtual environment with Vagrant and VirtualBox. So yes, each node runs inside a virtual machine, and they are sharing IO, CPU, RAM, etc.

About overhead: I understand it's quite normal to have a little overhead due to synchronous replication.

About correct configuration... I started just a few days ago, so you can imagine my cluster configuration is as simple as possible :)

I will try to run sysbench with more concurrent connections and see what happens. Don't take this as me complaining about XtraDB, not at all! In fact, I'm very happy with it. I used to run master-slave replication to balance reads (I bought the Percona O'Reilly book), and master-master replication seemed like a chimera to me; now it seems very reasonable to deploy in the real world.

Thanks, all you guys; your comments are much appreciated!

alexey.y...@galeracluster.com

Mar 19, 2015, 12:00:39 PM3/19/15
to percona-d...@googlegroups.com, rortiz...@gmail.com

I think scale-out is about utilizing more resources, not sharing
existing ones among more processes. If your VMs are sharing the resources
of a single HW host, it is only natural that you see lower numbers with
3 nodes.