
MySQL vs DBF speed


Tim Jacob

Jan 27, 2010, 12:43:27 PM
Has anyone done a speed comparison of MySQL vs standard DBF files?

I know that MySQL would dramatically cut down the bandwidth used over
the wire, but would also take up more server resources.

Is there a trade-off point at which one is faster than the other
depending on the amount of users?

I'm not talking about other advantages, such as avoiding data corruption.

Thanks

cul...@gmail.com

Jan 27, 2010, 1:29:46 PM
Tim, I use PostgreSQL via SQLRDD and the speed is very close to DBF.
Regards
Luiz

Rene Flores

Jan 27, 2010, 1:29:53 PM
Tim:

It all depends on the storage engine you use in MySQL and the hardware
you have on the other side of the wire.

In my experience MySQL is fast, but DBFs managed through Advantage
Database Server can be as fast as MySQL, or even a little faster,
even over the Internet.

With a 64-bit OS on the server side, performance can be excellent.

Regards

Rene Flores
http://www.ciber-tec.com


fatfat

Jan 27, 2010, 10:27:01 PM
DBF is the fastest on a local LAN, but not over the Internet, except when using ADS.
SQL is second to DBF on a local LAN, but first over the Internet, and adds many
DBMS features.


dlzc

Jan 28, 2010, 9:42:20 PM
Dear Tim Jacob:

On Jan 27, 10:43 am, Tim Jacob <tja...@nb.aibn.nospam.com> wrote:
> Has anyone done a speed comparison of MySQL vs
> standard DBF files?

From a post on the old newsgroup, dated 2004oct08, by Phil Ide, titled
"MySQL"
<QUOTE>
> ps. i find using the SQLRDD (MySQL) is MUCH slower
> than my DBFNTX in doing some field testing.

If you are using DBF-like navigation of the tables, e.g.

USE myTable
GO TOP
DO WHILE !EOF()
   ? myTable->MyField1
   SKIP
ENDDO


...then it will be slower. To take advantage of the SQL engine's
speed, you need to craft and issue queries and handle the result-set
returned. Further, you should limit the result-set to just the
columns and tuples you are interested in. You should also consider
adding indexes on the server side, creating stored procedures and
creating views.
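For instance, that advice might look like this in SQL (the table,
column and index names here are hypothetical, purely for illustration):

```sql
-- Fetch only the columns and rows actually needed, instead of
-- walking the whole table from the client:
SELECT cust_id, order_total
  FROM orders
 WHERE order_date >= '2010-01-01';

-- A server-side index on the filtered column lets the engine
-- avoid a full table scan for queries like the one above:
CREATE INDEX idx_orders_date ON orders (order_date);
```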

As an example, to convince a colleague of the benefits of shifting to
SQL, I wrote a little program which ported one of our smaller tables
into PostgreSQL.

I then supplied two additional programs which ran a simple query
against the database. One ran against the SQL server, the other ran
against the DBF table. The DBF app showed the results of performing
the same 'query' in different ways: Using the COUNT statement, using
a FILTER statement, and using a specially crafted index. The first
application ran against the SQL server without indexes.

The queries were run 30 times in each test, each run with different
parameters (and hence different results). The results are startling:

DBF tests:
<<<<<<<<<<<<< USING COUNT >>>>>>>>>>>>>>>>>
Group COI TPO TPFT COMP TOTAL
------+-----------+----------+----------+----------+----------
1- 4 | 30 949 6 913 1898
5- 9 | 19 696 10 710 1435
10-14 | 3 0 0 9 12
15-30 | 0 0 0 0 0
31- | 8 88 0 211 307
------+-----------+----------+----------+----------+----------
Total | 60 1733 16 1843 3652

Time: 5.30
Avg : 0.18

<<<<<<<<<<<<< USING FILTER >>>>>>>>>>>>>>>>>
Group COI TPO TPFT COMP TOTAL
------+-----------+----------+----------+----------+----------
1- 4 | 30 949 6 913 1898
5- 9 | 19 696 10 710 1435
10-14 | 3 0 0 9 12
15-30 | 0 0 0 0 0
31- | 8 88 0 211 307
------+-----------+----------+----------+----------+----------
Total | 60 1733 16 1843 3652

Time: 1.14
Avg : 0.04

<<<<<<<<<<<<< USING INDEX with SKIP CONDITION >>>>>>>>>>>>>>>>>
Group COI TPO TPFT COMP TOTAL
------+-----------+----------+----------+----------+----------
1- 4 | 30 949 6 913 0
5- 9 | 19 696 10 710 0
10-14 | 3 0 0 9 0
15-30 | 0 0 0 0 0
31- | 8 88 0 211 0
------+-----------+----------+----------+----------+----------
Total | 60 1733 16 1843 0

Time: 0.23
Avg : 0.01

...and the SQL test:


<<<<<<<<<<<<< USING SQL >>>>>>>>>>>>>>>>>
Group COI TPO TPFT COMP TOTAL
------+-----------+----------+----------+----------+----------
1- 4 | 30 949 6 913 1898
5- 9 | 19 696 10 710 1435
10-14 | 3 0 0 9 12
15-30 | 0 0 0 0 0
31- | 8 88 0 211 307
------+-----------+----------+----------+----------+----------
Total | 60 1733 16 1843 3652

Time: 0.28
Avg : 0.01

Notice that the SQL test compared quite favourably with the INDEX
test on the DBF. Now for the interesting bit...

The SQL server runs on a remote machine, while the DBF table was local.
When I moved the DBF to the server, the INDEX test took twice as long,
so in that event the SQL method outperformed the DBF. Just to really
ram this home: the DBF was opened EXCLUSIVE. If it had been opened
SHARED, with additional users accessing it, the duration of the
INDEX test on the DBF would climb to around 1.2 seconds.

If I then quadrupled the size of the data, the SQL result only
increased in duration by 0.02 seconds, whereas the shared remote
indexed DBF operation doubled.

FYI, the SQL query returned a single tuple with a single column:

SELECT count(*) FROM rentorde WHERE _group >= $1 AND _group <= $2 AND
_cover like '$3';

(where $1, $2 and $3 are replaced with actual values).
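In PostgreSQL that parameterised form can also be prepared once and
then executed with actual values on each run; a sketch using Phil's
query (the 'COI%' pattern is an invented example value):

```sql
-- Prepare the statement once; $1..$3 are bound at execution time:
PREPARE grp_count (int, int, text) AS
  SELECT count(*) FROM rentorde
   WHERE _group >= $1 AND _group <= $2 AND _cover LIKE $3;

-- Each of the 30 test runs then supplies its own parameters:
EXECUTE grp_count (1, 4, 'COI%');
```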

Accessing SQL servers through the database language layer of Clipper
or similar systems is a botch-job, only good for migrating code (which
should in any case be immediately rewritten for the proper SQL paradigm).

Regards,
-- Phil Ide
Eagles may soar, but weasels aren't sucked into jet engines.
<END QUOTE>

David A. Smith

FP

Feb 1, 2010, 9:42:54 AM

> Accessing SQL servers through the database language layer of Clipper
> or similar systems is a botch-job, only good for migrating code (which
> should in any case be immediately rewritten for the proper SQL paradigm).

I completely agree with you. And it must be said that with a SQL server
you can get better results just by upgrading ONE server, not all the
clients. Nevertheless, your tests seem tailored to a SQL server...
A more balanced test would probably include some INSERTs, UPDATEs and
DELETEs... I believe the SQL server would win again, but not by such a
big margin...

And for a more balanced test you should probably also include LetoDB
and similar tools in the mix.

xProgrammer

Feb 2, 2010, 6:44:39 AM
Speed depends upon your environment, the logic of what you need to do,
how efficiently you code it, the number of concurrent users, etc. If
we are talking about a multi-user system, then traditional DBF access
will use a lot of bandwidth and perform slowly across a slow WAN link.
SQL per se is not the solution (in fact SQL by its very nature
introduces overheads / inefficiencies), but the client-server
architecture that SQL database back ends come with (used appropriately)
can provide a dramatic improvement. However, you can run such an
architecture without necessarily using MySQL or another SQL-based
database. I think LetoDB is one such alternative; in my case I wrote my
own client-server database back end in xHarbour using DBF tables.
Queries are essentially pre-compiled on the database back end,
communication is by IP sockets, and data is returned in serialised form
as an array of property name / value pairs, which can be used to update
an appropriate object directly using __ObjSetValueList(). Once set up,
this is simpler and more efficient than using an SQL-based database
back end, but does not (at least without extra work) provide
transaction control, ad hoc queries, etc.

The above is a bit of a go-it-alone approach, and I would think that
most people who need efficient data access across slow connections
would go to a well-known SQL back end and get a performance boost from
its client-server architecture (although the SQL part is actually less
efficient than hand-coded queries). And certainly, if you do follow
that route, it is highly desirable to recode your data-access logic
with SQL in mind.
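Recoding with SQL in mind usually means replacing record-at-a-time
loops (GO TOP / SKIP) with set-based statements that run entirely on
the server; a minimal sketch, with hypothetical table and column names:

```sql
-- One statement replaces a client-side loop that reads, tests and
-- rewrites each record over the wire:
UPDATE orders
   SET status = 'OVERDUE'
 WHERE status = 'OPEN'
   AND due_date < CURRENT_DATE;
```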

Whilst I have personally written some quite large projects that used
SQL-based back ends, I find SQL ugly and restrictive, and feel that it
runs counter to OO techniques.

Enrico Maria Giordano

Feb 2, 2010, 6:52:36 AM

"xProgrammer" <no...@nowhere.org> ha scritto nel messaggio
news:69OdnWXr1eS6jfXW...@giganews.com...

> I find SQL ugly, restrictive and feel that it runs counter to OO
> techniques.

I agree. It even runs counter to structured programming principles.

EMG

--
EMAG Software Homepage: http://www.emagsoftware.it
The EMG's ZX-Spectrum Page: http://www.emagsoftware.it/spectrum
The Best of Spectrum Games: http://www.emagsoftware.it/tbosg
The EMG Music page: http://www.emagsoftware.it/emgmusic


FP

Feb 2, 2010, 8:50:03 AM
xProgrammer wrote:

> I think LetoDB is
> one such alternative, in my case I wrote my own client server data base
> back end in xHarbour using dbf tables. Queries are essentially pre-
> compiled on the data base back end, communication is by sockets (IP),
> data is returned in serialised form as an array of property name / value
> pairs which can be used to update an appropriate object directly using
> __ObjSetValueList(). Once set up this is simpler and more efficient than
> using an SQL based data base back end, but does not (at least without
> extra work) provide transaction control, ad hoc queries etc.

You may have a look at the recent Harbour NETRDD, which can serve two purposes:

- it is a data transport layer that connects a client to a server where
the DBF resides; this transport layer is used instead of SMB/NFS, and it
is quicker and can be encrypted. The server is not a SQL or LetoDB
equivalent; it just moves DBF records between server and client in a
faster and more secure way
- it is possible to have the server run Clipper code, in which case all
the work is done on the server. Again, it is not a SQL or LetoDB
equivalent; it is more like an RPC call


> Whilst I have personally written some quite large projects that used SQL
> based back ends I find SQL ugly, restrictive and feel that it runs
> counter to OO techniques.


I don't completely agree that SQL is slower than DBF... they both have
pros and cons. And of course SQL is not OO, but I see no problem with
that... or should I?


xProgrammer

Feb 3, 2010, 6:16:57 AM
On Tue, 02 Feb 2010 14:50:03 +0100, FP wrote:


> You may have a look at the recent Harbour NETRDD, which can serve two
> purposes:

NETRDD might just be a little along the lines of what I do. I use IP
sockets; the client passes an integer and an array of lookup values
(which may be empty for some queries). The server receives this,
performs the required query, and passes the result back using
HB_Serialise(), so there is some compression and what is passed back is
immediately usable. The server is running xHarbour code (as, of course,
is the client).

> - it is a data transport layer that connects a client to a server where
> the DBF resides; this transport layer is used instead of SMB/NFS, and it
> is quicker and can be encrypted. The server is not a SQL or LetoDB
> equivalent; it just moves DBF records between server and client in a
> faster and more secure way
> - it is possible to have the server run Clipper code, in which case all
> the work is done on the server. Again, it is not a SQL or LetoDB
> equivalent; it is more like an RPC call
>

> I don't completely agree on the fact that sql is slower than dbf... they
> both have pros and cons.

They certainly have pros and cons. SQL is really designed to enable ad
hoc reporting, but there are arguably better solutions for this,
although none of them would recognise my back end. Well-written
[x]Harbour code should be able to outperform SQL (and provide more
flexibility) for the data-access part of the process. Having said that,
unless you use some client-server scheme with [x]Harbour, using an
SQL-based server will provide better performance, but this is achieved
because of the architecture, and in spite of, rather than because of,
using SQL.

> And of course SQL is not OO, but I see no
> problem in this... or should I?

I guess the answer to that depends upon the type of code you are writing
and the type of data structures you want / need to support. For me it
represents an issue.

Regards
xProgrammer
