--
Kevin Powick
:
:Any opinions on perhaps which one of these three vendors (or others)
:provide the "best" set of MySQL components with regard to features,
:performance, stability and support?
Kevin,
I'd also like the answer to this question. I've been playing
with several MySQL components over the past 2 weeks and I'm still not
100% satisfied with any of them. Many of them don't come with much
support or documentation. There is also Zeos and DirectSQL from
http://sourceforge.net/projects/directsql
Doesn't anyone use Delphi with MySQL??? What are the fastest,
most reliable, best-supported MySQL components for Delphi?
Brent
> Kevin,
> I'd also like the answer to this question. I've been playing
> with several MySQL components over the past 2 weeks and I'm still not
> 100% satisfied with any of them. Many of them don't come with much
> support or documentation. There is also Zeos and DirectSQL from
> http://sourceforge.net/projects/directsql
Brent,
After playing with a few of them, I think I'm going with CoreLab and
these are my reasons:
Zeos
----
Zeos seems to have faded. At one time I saw lots of postings about it,
but not within the last year.
SciBit
------
SciBit only supports MySQL 3.x. This caused strange results when I was
running their components against the latest 4.x (stable) version of
MySQL. Mostly field type errors.
The author told me that an updated version will be coming out within
about 3 weeks, but the fact that MySQL 4.x has been out for many months
indicates to me that supporting this component set is not a top
priority for SciBit. IIRC, this set was born out of internal use, and
its commercialization was an afterthought.
DirectSQL
---------
DirectSQL is fast, but it is not a set of components with TDataSet
descendants. So you can read a result set, but not bind it to any data
controls. Also, I have yet to figure out how to write data back to the
server. I didn't look at this code for long.
No real docs and the project is a "one man show", though the author
seems to have teamed-up with MicroOLAP to incorporate his work with
their offering.
The fact that it doesn't rely on libmysql.dll seems attractive at
first, but then you realize that you would not benefit from updates,
fixes, and optimizations that MySQL would provide to that library. You
would have to wait for the author to update his code.
MicroOLAP
---------
MicroOLAP's components gave me some strange results when I was running
tests. E.g., I would do a Select and process the results; however,
there was always one record which was left unchanged. I also ran into
some other oddities.
It also doesn't rely on libmysql.dll, but I think that's because
they've incorporated DirectSQL. Funny enough, I ran into a
compatibility problem between DirectSQL and MicroOLAP. Seems that one
library is more recent than the other, but I'm not sure which. Delphi
just complained about components being compiled with different
versions.
Another thing that bothered me was their support forums. Too many 2nd
and 3rd posts from people begging/pleading for answers to their
questions. Too many questions with no replies. Even some complaints
that tech support was only answering the "easy" questions. Maybe their
forums are not "officially" supported and paying customers get priority
e-mail support? Regardless, it made me nervous.
CoreLab
-------
The components worked well for me, didn't cause any errors, and seemed
quite fast. CoreLab is also supporting .NET, which indicates that they
are keeping up with industry trends, but the thing that finally made me
go with CoreLab is that RemObjects is using it for MySQL access in
their Data Abstract product. Those guys create top-notch software and
tend to work with other vendors that do the same.
So there you have it. It looks like I'll be going with CoreLab.
I would be interested in hearing about your experiences.
--
Kevin Powick
(The times below do not include the time to open the query.)
MicroOlap traversed the query at around 40,000 records/second which
was somewhat disappointing because that was what I was getting from
non-MySQL databases. Why switch to a fast database like MySQL if I'm
not getting better performance?
Yesterday I discovered SciBit traversed the query at an amazing
139,000 rows per second which was blindingly fast.
Tonight CoreLab's query component traversed the same table at, are you
ready for this? 1.6 million rows per second. My jaw just about hit the
keyboard. Of course I didn't believe it at first. I shut down the
MySQL server and restarted it, thinking it was benefitting
from the cache. Re-running it gave me the same results. I will have to
test the cache updates to see how they stand up. BTW, I threw the
entire 2.7 million row table at it, and it didn't gobble up all the
RAM like MicroOlap and SciBit did. To its credit, SciBit kept the
application running with only 5-10 MB of physical memory left (from
700MB free when I started the query). MicroOlap didn't fare as well.
CoreLab left about 110MB free, so a query of this size didn't affect
the machine's performance at all.
I also like CoreLab's FetchRows property that will open a large
query immediately by fetching only a few rows at a time (it runs
considerably slower, at 20k rows per second, but that's more than
adequate for grids or graphs). The documentation is also first rate and a lot of
work went into the examples. I can see CoreLab put a lot of effort
into their product and they're the clear winner so far. I'll play
around with the components a few more days to see how they stand up.
If the rest of the components work as well as their query component,
I'll be a happy camper. :)
Brent
Brent,
Thanks for the additional info.
> Why switch to a fast database like MySQL if I'm
> not getting better performance?
I was pretty amazed at MySQL's retrieval speed in general. I
originally loaded it on RHL 8.0, running on an old P133 with only 128MB
of RAM and couldn't believe how fast the data was coming back.
BTW, since you're using MySQL, do yourself a favour and buy MySQL
Manager 2.x from EMS-HiTech
http://ems-hitech.com/mymanager
It's an outstanding product. Free trial available.
While MySQL is quick to retrieve data, I found it about the same speed
for updating data as some of the other DBs that I use. No big worries
though because, for my purposes, quick retrieval is much more
important.
--
Kevin Powick
:In article <f744ivgsumcc1at92...@4ax.com>,
:bdgr...@NOSPAMmailbolt.com says...
:
:Brent,
:
:Thanks for the additional info.
:
:> Why switch to a fast database like MySQL if I'm
:> not getting better performance?
:
:I was pretty amazed at MySQL's retrieval speed in general. I
:originally loaded it on RHL 8.0, running on an old P133 with only 128MB
:of RAM and couldn't believe how fast the data was coming back.
Did you notice Group By's in MySQL 4 are extremely fast? What takes
20-30 minutes on a fast 3rd party Delphi database now takes only 5
seconds with MySQL. That's up to 360x faster.
:
:BTW, since you're using MySQL, do yourself a favour and buy MySQL
:Manager 2.x from EMS-HiTech
:
:http://ems-hitech.com/mymanager
:
:It's an outstanding product. Free trial available.
I agree. It has quite a few competitors, but no equal.
:While MySQL is quick to retrieve data, I found it about the same speed
:for updating data as some of the other DBs that I use.
Ahhh, well, yes and no.<g> How about if you cheat?<g> If you use
LOAD DATA INFILE to load ASCII-delimited data, it is extremely fast.
On my machine I'm loading 1.5 to 2.5 million rows per minute into the
table. That is pretty fast considering other Delphi databases are
lucky to get that many rows loaded in an hour, let alone a minute.
There are also REPLACE and IGNORE options for rows that duplicate an
existing primary key (REPLACE overwrites them, IGNORE skips them). So
I'm thinking instead of adding/updating a million
rows using INSERT/UPDATE, I'll simply write the fields out to a text
file (which is extremely fast in Delphi) and use LOAD DATA INFILE to
update and insert data in one operation. Since I have exclusive access
to the table during this operation, I don't see a problem. I will also
have to investigate cached updates with CoreLab for those times when
LOAD DATA INFILE is overkill.
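To make the idea concrete, here is a minimal Python sketch of the write-then-bulk-load approach (the `measurements` table, its columns, and the data are hypothetical; a Delphi program would simply write the same tab-delimited lines):

```python
import csv
import os
import tempfile

# Rows we would otherwise INSERT/UPDATE one at a time
# (hypothetical data for a hypothetical `measurements` table).
rows = [
    (1, "2003-08-01", 42.5),
    (2, "2003-08-01", 17.0),
    (3, "2003-08-02", 99.9),
]

# Write tab-delimited text -- the format LOAD DATA INFILE reads by default.
path = os.path.join(tempfile.gettempdir(), "measurements.txt")
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t", lineterminator="\n")
    writer.writerows(rows)

# REPLACE overwrites rows whose primary key already exists;
# IGNORE would skip them instead.
sql = (
    "LOAD DATA INFILE '%s' REPLACE INTO TABLE measurements "
    "FIELDS TERMINATED BY '\\t'" % path.replace("\\", "/")
)
print(sql)
```

The text file can then be handed to the server in one statement, which is where the bulk-load speed comes from.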
Brent
> Kevin Powick <nos...@nomail.com> wrote:
> :While MySQL is quick to retrieve data, I found it about the same speed
> :for updating data as some of the other DBs that I use.
> Ahhh, well, yes and no.<g> How about if you cheat?<g> If you use the
> LOAD DATA INFILE to load ASCII delimited data, it is extremely fast.
Well, if I could cheat I would <g>. Thanks for pointing that out.
From a daily operations standpoint, how many times are you going to
load your 2 million records? Something like that would only happen
once for us on an initial system load/conversion. Handy nonetheless.
> On my machine I'm loading 1.5 to 2.5 million rows per minute into the
> table. That is pretty fast considering other Delphi databases are
> lucky to get that many rows loaded in an hour let alone a minute.
True, but let's not forget that MySQL doesn't provide built-in
referential integrity, which is extra overhead for databases such as
IB. Working with transactions also imposes a performance hit.
Regardless, MySQL is bloody fast, even with lower-end hardware. As for
referential integrity, I'm quite used to maintaining it at the
application level anyway.
> I will also
> have to investigate cached updates with CoreLab for those times when
> LOAD DATA INFILE is overkill.
Unless data transfer/communication is an issue, such as with remote
clients on low-bandwidth connections, why bother with cached updates
and the potential hassle of conflict resolutions?
--
Kevin Powick
Most DBs are fast if we are loading simple, plain rows.
But when loading 3K tables with several multikeyed indexes, referential
integrity, views, rules and sequences MySQL could become the virtual turtle
of the DB world.
When we quote loading statistics, IMHO one needs to declare the structure
that is being loaded as well as the row size so that a valid comparison can
be inferred.
Hal Davison
Davison Consulting
:In article <d3m5ivghi7q75i7ql...@4ax.com>,
:bdgr...@NOSPAMmailbolt.com says...
:
:> Kevin Powick <nos...@nomail.com> wrote:
:
:> :While MySQL is quick to retrieve data, I found it about the same speed
:> :for updating data as some of the other DBs that I use.
:
:> Ahhh, well, yes and no.<g> How about if you cheat?<g> If you use the
:> LOAD DATA INFILE to load ASCII delimited data, it is extremely fast.
:
:Well, if I could cheat I would <g>. Thanks for pointing that out.
:
:From a daily operations standpoint, how many times are you going to
:load your 2 million records? Something like that would only happen
:once for us on an initial system load/conversion. Handy none the less.
For me it could happen as often as twice a day, but probably once or
twice a week. We're running statistical analysis on data so it is done
every time we start a new run.
:
:> On my machine I'm loading 1.5 to 2.5 million rows per minute into the
:> table. That is pretty fast considering other Delphi databases are
:> lucky to get that many rows loaded in an hour let alone a minute.
:
:True, but let's not forget that MySQL doesn't provide built-in
:referential integrity, which is extra overhead for databases such as
:IB. Working with transactions also imposes a performance hit.
With InnoDB you of course get RI and transactions. I noticed a simple
query on an InnoDB table is about 10x slower than on a MyISAM table, so
instead of the query finishing in 0.06 seconds it takes 0.60 seconds.
This is a moot point unless you have a lot of users querying the table.
For my webserver app I will need to use MyISAM for read-mostly tables
and use InnoDB just for tables that are being updated. At least with
MySQL the developer has a choice.
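For illustration, that per-table choice looks like this in MySQL 4.0-era DDL (hypothetical table names; `TYPE=` was the keyword of the day, later superseded by `ENGINE=`):

```sql
-- Read-mostly table: MyISAM for raw query speed
CREATE TABLE page_hits (
    id   INT NOT NULL PRIMARY KEY,
    url  VARCHAR(255),
    hits INT
) TYPE=MyISAM;

-- Frequently updated table: InnoDB for transactions and row-level locking
CREATE TABLE orders (
    id     INT NOT NULL PRIMARY KEY,
    status CHAR(1)
) TYPE=InnoDB;
```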
:
:Regardless, MySQL is bloody fast, even with lower-end hardware. As for
:referential integrity, I'm quite used to maintaining it at the
:application level anyway.
:
:> I will also
:> have to investigate cached updates with CoreLab for those times when
:> LOAD DATA INFILE is overkill.
:
:Unless data transfer/communication is an issue, such as with remote
:clients on low-bandwidth connections, why bother with cached updates
:and the potential hassle of conflict resolutions?
Good question. I'm trying to speed up updates to a CoreLab
query/table. The typical update loop:

  Query.First;
  while not Query.Eof do
  begin
    Query.Edit;
    { change field values }
    Query.Post;
    Query.Next;
  end;
is running at around 912 updates per second which is about 3x slower
than other databases.
Another set of MySQL components I tested - I can't remember which one
it was (but not CoreLab) - was doing only 20 updates per second! As the
number of rows in the query/table increased, the updates got slower. I
only had 1000 rows in the table and got a measly 20 updates per
second. Maybe it was refreshing the dataset after every post? I never
did figure out why. As soon as I saw CoreLab's components I ditched
the other set of components and never looked back.<g>
Even though CoreLab's components worked much faster, I thought using
cached updates would offload the work from the dataset that I'm
traversing, and the updates would be done in batches. The server would
get a batch of updates at one time, instead of a thousand individual
updates. So table locking should be reduced and it should be less work
for the server. Cached updates are also supposed to be less work for
the server than using transactions.
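The batching idea can be illustrated with Python's stdlib sqlite3 standing in for the server (this says nothing about CoreLab's implementation; it just shows the collect-then-apply pattern that replaces a thousand individual posts):

```python
import sqlite3

# In-memory database stands in for the database server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, 0) for i in range(1000)])
conn.commit()

# Cached-updates style: collect all the changes locally first...
updates = [(i * 2, i) for i in range(1000)]

# ...then apply them as one batch inside a single transaction,
# instead of a thousand individual Edit/Post round trips.
with conn:
    conn.executemany("UPDATE t SET val = ? WHERE id = ?", updates)
```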
With CoreLab's components cached updates don't seem to have any effect
(so far). This is still a work in progress and I have more testing to
do. As far as conflict resolutions, there won't be any because this is
a batch run with no one else using the table. I will try and add an
exclusive lock on the table to see if that speeds things up any.
If it ends up still being too slow (say < 5000 rows/second), I can
always resort to LOAD DATA INFILE which is over 25,000 rows/second. I
can get away with this because it is a batch run and if anything goes
wrong, I drop the table and start again. Also there is no one else
using the table which makes this quite feasible.
Brent
> When we quote loading statistics, IMHO one needs to declare the structure
> that is being loaded as well as the row size so that a valid comparison can
> be inferred.
While this is true, I think the assumption in this thread is that we
(the original posters) are drawing our comparisons based on our actual
experiences with MySQL compared to the other DBs that we actually use.
For us, the conclusion that MySQL is fast is valid for our particular
situations.
As already stated, MySQL does not support all features found in other
DBs, such as referential integrity, which would impact performance.
More extensive benchmarking of MySQL performance can be found at:
http://www.mysql.com/information/benchmarks.html
--
Kevin Powick
:
:"Kevin Powick" <nos...@nomail.com> wrote in message
:news:MPG.198cb6181...@newsgroups.borland.com...
:> In article <d3m5ivghi7q75i7ql...@4ax.com>,
:> bdgr...@NOSPAMmailbolt.com says...
:> Regardless, MySQL is bloody fast, even with lower-end hardware. As for
:> referential integrity, I'm quite used to maintaining it at the
:> application level anyway.
:>
:> > I will also
:> > have to investigate cached updates with CoreLib for those times when
:> > LOAD DATA INFILE is overkill.
:>
:> Unless data transfer/communication is an issue, such as with remote
:> clients on low-bandwidth connections, why bother with cached updates
:> and the potential hassle of conflict resolutions?
:
:Most DBs are fast if we are loading simple, plain rows.
I wish that were always the case. I've encountered a few databases that
slow down considerably as more rows are added to a table. The
database may start off blazingly fast, but as the table gets larger
(over 100,000 rows), imports slow down and index-based queries
slow down too. After 2 million rows, queries may slow down by a
factor of 10 even though they return the same number of rows (<
1500) and the column is indexed.
MySQL excels at fast queries and can handle over a hundred million
rows of data with ease. In my apps, Group By's in MySQL blow the
doors off the other databases I've tried, by as much as 360x.
That's nothing to sneeze at. These comparisons use the same data, same
table structures and same indexes.
:But when loading 3K tables with several multikeyed indexes, referential
:integrity, views, rules and sequences MySQL could become the virtual turtle
:of the DB world.
Yes, if you load everything possible on the database server, it will
have to work harder for the same number of connected users. The
majority of corporate applications demand a lot of RI, triggers,
views, stored procedures etc. and for that people will choose MS SQL,
Oracle, and Cache. But to get the speed they need, they also have to
beef up the hardware considerably. If the customer has an unlimited
amount of money and support staff, then they are better off with a
full-featured DBMS.
I certainly don't recommend MySQL unless the application fits within
the capabilities of the database. In such situations, I don't think
anything comes close to matching MySQL's price/performance ratio. The
InnoDB table type that was introduced a couple of years ago
supports RI and transactions quite well and will help MySQL grow
into other markets. Stored procedures, subselects, and views will be
coming in either 4.1 or 5.x.
:When we quote loading statistics, IMHO one needs to declare the structure
:that is being loaded as well as the row size so that a valid comparison can
:be inferred.
I was comparing it to other databases that I have been using, with
the same table structure and the same data. The reason MySQL was
faster is its LOAD DATA INFILE statement, which the other databases I
tested did not have. MySQL was able to "batch import" the data at a
very high rate of speed. If I had to resort to adding rows one at a
time like I did for the other databases, then MySQL
would have been slower. Is it a fair comparison? No, because it is
comparing two distinct ways of importing data. However, that point
becomes moot because what matters is the time it takes to load the
data, not how it was achieved. If other databases lack a LOAD DATA
INFILE feature, then their import benchmark will have to fall back
on what they do support.
Brent
Brent, you are quite correct.
We use PostgreSQL for our projects running on a SuSE 8.2 with a 100BaseT
network supporting Win XP Pro clients in a financial wholesale/retail
petroleum distribution application.
We have some decent size databases, but nothing compared to the volume you
were mentioning.
Keep up the good work. It's a joy to read about your experiences with
MySQL.
Hal Davison
Davison Consulting
I've been using MicroOLAP and have been a bit bothered as well by
their lackluster support. However, I don't see ANY support options on
CoreLab's site, with the exception of a support e-mail on the 'Contact
Us' page. That's not too impressive, either. I personally like to see
a vendor have a newsgroup.
--Bruce
Bruce Vander Werf
bru...@hotmail.com
> I've been using MicroOLAP and have been a bit bothered as well by
> their lackluster support. However, I don't see ANY support options on
> CoreLab's site, with the exception of a support e-mail on the 'Contact
> Us' page.
Good point. I guess I didn't even notice that because I didn't have
any problems with their product :-)
I started looking at MicroOLAP's NGs because of the difficulties I ran
into.
> I personally like to see
> a vendor have a newsgroup.
I do too. It probably even helps reduce the load on their support
department, because questions may be answered by other users, or the
answer may already be posted in the NG.
--
Kevin Powick
after this thread, I went with CoreLab.
And I'm mightily disappointed. I don't use
data-aware components, so all that stuff
is needless overhead for me.
After a little mucking around, I got it working
with my base provider independence framework, and
passing all my DUnit tests.
But the benchmarks picked up a problem. It's slow -
very slow. Fetch speed is OK; I haven't tested join speed
yet. But the test that gives me a problem is this one:
part 1:
run 100 insert statements inserting a simple record
the sql changes each time
part 2:
run a single sql statement 100 times with different
parameters each time, using bound parameters
results:
Part 1 Part 2
SQL Server 307 105
DBIsam 1753 47
Firebird 393 56
MySQL 4409 4747
all times in milliseconds
Firebird is using IBX as released with D5 and the
current Stable release of Firebird. DBIsam is an
old version not running in client/server mode. SQL
Server is MSSQL 2000 using ODBC (ODBC Express v5).
All are local to my development machine, so it's a
fairly equal test of my provider layers. And my SQL
layer is a simple mapping onto MySQL - it doesn't
do anything else.
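For reference, the shape of the two-part benchmark can be sketched with Python's stdlib sqlite3 (purely to show the structure of the test, not to reproduce any of the numbers above):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rec (id INTEGER, name TEXT)")

# Part 1: 100 insert statements where the SQL text changes each time,
# forcing a fresh parse for every statement.
t0 = time.perf_counter()
for i in range(100):
    conn.execute("INSERT INTO rec VALUES (%d, 'name%d')" % (i, i))
part1_ms = (time.perf_counter() - t0) * 1000

# Part 2: one statement executed 100 times with bound parameters --
# the driver/server can parse and plan the SQL once and reuse it.
t0 = time.perf_counter()
for i in range(100, 200):
    conn.execute("INSERT INTO rec VALUES (?, ?)", (i, "name%d" % i))
part2_ms = (time.perf_counter() - t0) * 1000

print("part 1: %.1f ms, part 2: %.1f ms" % (part1_ms, part2_ms))
```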
So the performance is shocking, and binding is *worse*.
So if all you are interested in is fetch speed, you might
think that MySQL with CRLab stuff is fast. But if you call
execute very often - and who doesn't - you might think
again. Especially if you like to use parameters to work
around the inherent delays in SQL statement parsing.
Grahame
p.s. for myDAC users, I am using TMyCommand with current
source for testing this stuff, since I was advised
that TMyQuery was even slower, though the difference
is minuscule. And my MySQL version is the current
stable release.
> after this thread, I went with CoreLab.
> And I'm mightily disappointed.
I guess you didn't read the thread too carefully then. I just read it
again and Brent clearly states that update speed with MySQL was not
outstanding, but this probably has little to do with CoreLab's
components.
Your own tests compare databases, not components, so I'm not sure it's
fair to say you are disappointed with CoreLab.
For me the focus/purpose of the thread was to determine the best set of
Delphi components to use with MySQL. I still believe that CoreLab's
are the best overall (performance, compatibility, support, etc).
If you find others, I would be very interested in hearing about them
and why you feel they are superior. I'm always looking to improve
performance.
--
Kevin Powick
> In article <3f442753$1...@melb-inet.dmz.kestral.com.au>,
> gra...@kestral.com.au says...
>
> > after this thread, I went with CoreLab.
> > And I'm mightily disappointed.
>
> I guess you didn't read the thread too carefully then. I just read
> it again and Brent clearly states that update speed with MySQL was
> not outstanding, but this probably has little to do with CoreLab's
> components.
>
> Your own tests compare databases, not components, so I'm not sure
> it's fair to say you are disappointed with CoreLab.
well, maybe MySQL is slower doing updates. Or maybe it's slower
doing SQL parsing (and I note that some published benchmarks don't
agree that MySQL is significantly slow doing updates to the
degree I've observed). But there's really no excuse for repeated
inserts with bound parameters to be slower than without bound
parameters.
As far as I can make out, this is a deficiency with the CoreLab
stuff, since the raw MySQL API does support bound params,
but the CoreLab component set doesn't make use of it.
> For me the focus/purpose of the thread was to determine the best set
> of Delphi components to use with MySQL. I still believe that
> CoreLab's are the best overall (performance, compatibility, support,
> etc).
yes, I can't complain about support. The doco is a little bit of
a let down, but it is there, which isn't always the case.
I don't know about compatibility - I make heavy use of
nested queries, so I can only care about 4.1.
> If you find others, I would be very interested in hearing about them
> and why you feel they are superior. I'm always looking to improve
> performance.
well, I would've used DirectSQL if it had supported bound parameters.
The CoreLab stuff will get us going, but we will wrap the mysql dll
directly some time, when we really care about performance.
Grahame
"Grahame Grieve" <gra...@kestral.com.au> wrote in message news:<3f442753$1...@melb-inet.dmz.kestral.com.au>...
> So if all you are interested in is fetch speed, you might
> think that MySQL with CRLab stuff is fast. But if you call
> execute very often - and who doesn't - you might think
> again. Especially if you like to use parameters to work
> around the inherent delays in SQL statement parsing
You are right. Recently we have noticed a large time delay on execution
of complex queries with Length(SQL.Text) > 1000.
Due to a MySQL Server peculiarity, the delay was mostly noticeable on
inserting/updating BLOB fields.
We have now managed to optimize it and speed up query execution several
times over. We expect that the next MyDAC build will give quite different
results on these tests.
Moreover, MyDAC 2.0 is coming with support for new MySQL Server 4.1
features, such as:
- parameter binding;
- the ability to connect without the libmysql.dll client library;
- improved MySQL Embedded Server support;
- a BDE to MyDAC Migration Wizard;
- new TMyServerControl and TMyLoader components;
- support for traffic compression.
Other improvements (Unicode support, statement preparing) are also
expected, but they are waiting for the MySQL Server 4.1.1 release.
Best regards,
Vladimir Zheleznyak
> But there's really no excuse for repeated
> inserts with bound parameters to be slower than not with bound
> parameters.
> well, I would've used DirectSQL if it had supported bound parameters.
While bound parameters are available in MySQL 4.1, I thought that 4.1
was still in Alpha. AFAIK, the stable/production version (4.0.x) does
not support bound parameters.
DirectSQL looked interesting, but I didn't see the support or any
recent updates. It's a one man show. It also doesn't have any
descendants for binding to data aware controls. I know you are not
interested in this feature, but I need it.
--
Kevin Powick
"Kevin Powick" <nos...@nomail.com> wrote in message
> > well, I would've used DirectSQL if it had supported bound parameters.
If you check the source code of MySQL you will see that although the
idea of prepared statements has been documented and presented to the public,
it has not yet been fully implemented.
> DirectSQL looked interesting, but I didn't see the support or any
> recent updates. It's a one man show. It also doesn't have any
> descendants for binding to data aware controls. I know you are not
> interested in this feature, but I need it.
Just to clarify a few things:
DirectSQL is no longer a one-man show. There is a new version to be
released pretty soon.
There have not been many updates lately, as no problems have been reported
...
The new version is a complete redesign considering this architecture:
                    core engine (classes)
                   /          |          \
  Dataset descendant    COM objects    LIBMYSQL.DLL-compatible API
Without going into too many details, I currently collaborate with many MySQL
users in order to make/present this as the best possible solution. (And this
is NOT limited to just MicroOLAP - which, btw, are really great guys. I know
their newsgroups don't look very full of friendly chats, but believe me, they
really are the type who do it rather than just say they do it - if you
check their "bug" reports, most of them are unanswered in the
newsgroup, yet if you check in a few days there has been a fix released.)
As an additional note: DirectSQL will also be supported natively on Delphi
.NET.
If you would like to know even more details about the new version, please do
not hesitate to contact me.
Best regards,
Cristian Nicola
> Just to clarify a few things:
> DirectSQL is no longer a one-man show. There is a new version to be
> released pretty soon.
> The new version is a complete redesign considering this architecture:
> As an additional note: DirectSQL will also be supported natively on Delphi
> .NET.
>
> If you would like to know even more details about the new version please do
> not hesitate to contact me.
Thanks for the update Cristian. I will follow-up by e-mail as I am
interested in your product.
--
Kevin Powick
"Grahame Grieve" <gra...@kestral.com.au> wrote:
>
> Hello,
>
>
> You are right. Recently we have noticed a large time
> delay on execution complex queries with Length(SQL.Text)
> > 1000.
> According to MySQL Server specific the delay was mostly noticeable on
> insert/update BLOB fields. Now we managed to optimize it and speed up
> query execution at several times. Suppose, at the next MyDAC version
> these tests give quite different results.
>
> Moreover, MyDAC 2.0 is coming with new features of MySQL
> Server 4.1 support, such as:
> - parameters binding;
> - ability to connect without libmysql.dll client library;
> - improved MySQL Embedded Server support;
> - BDE to MyDAC Migration Wizard;
> - New TMyServerControl and TMyLoader components;
> - Support of compressing traffic.
>
> Also another improvements are expected (Unicode
> support, preparing). But they are wait for MySQL
> Server 4.1.1 release.
>
great to hear this. when are you going to release?
Are you saying that you are holding your version 2 for
the mysql release of 4.1.1?
Grahame
"Grahame Grieve" <gra...@kestral.com.au> wrote:
>great to hear this. when are you going to release?
>Are you saying that you are holding your version 2 for
>the mysql release of 4.1.1?
The next MyDAC version will be available in the middle of September.
I cannot tell you yet whether it will be MyDAC 2 or the next build of MyDAC 1.50.
> Are you saying that you are holding your version 2 for
> the mysql release of 4.1.1?
We haven't decided yet whether it makes sense to wait.
Best regards,
Vladimir Zheleznyak