The xdb files for these databases have been modified extensively.
Please advise.
Murray
Are you using the original database as the basis for the compare, or are you
using an Archived Model?
If you are using the database, can you try:
1) Using an archived model, to see if performance is better that way.
2) Reverse engineering the database directly into a new model, to see how
long that takes.
What I am trying to do is isolate whether the issue is in reading the
database catalog or in the creation of the DDL itself. Test (1) above will
show whether the DDL creation is slow; test (2) will show whether reading
the catalog is slow.
If both are fast on their own, then the performance problem may be in the
compare itself.
Let me know if this helps,
David.
"Murray Sobol" <murray...@dbcsmartsoftware.com> wrote in message
news:l71gi5p927bqjgqpb...@4ax.com...
If your dbeng/dbsrv is on Windows, you can also try to defragment the
database file using the Windows defrag utility.
Also make sure you have started the engine with a decent-sized cache, using
the -c command line parameter or the -cl and -ch parameters. SQL Anywhere
dynamically resizes its cache while running, and this can take some time, so
setting an explicit size up front can help.
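As a rough example only (the executable, server name, sizes and database
file below are placeholders; substitute whatever matches your installation
and version):

   dbsrv11 -n ReposSrv -c 256M repository.db

or, to let the cache grow between fixed limits instead of a single initial
size:

   dbsrv11 -n ReposSrv -cl 128M -ch 512M repository.db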
Running multiple databases with different page sizes in the same engine can
also affect performance. Make sure all databases use the same page size, or
run the repository database in its own dedicated engine.
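If you're not sure which page size a given database is using, you can check
it from any connection with something like:

   SELECT DB_PROPERTY( 'PageSize' );

which returns the page size in bytes; compare the value across the databases
sharing the engine.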
If the repository database server resides on the same machine as PD, also
make sure that you have 'shared memory' turned on, and that you are not
starting and stopping the database on first connect/last connect. Combined
with a large transaction log, this can be a huge performance hit: every
restart requires a rollforward/rollback through the log, and the cache is
emptied each time the database stops.
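As an illustration only (the file, server, user and password here are
placeholders, and the parameters PD actually writes into its data source may
differ), a local connection string along these lines keeps the connection on
shared memory and stops the engine from unloading the database when the last
connection closes:

   DBF=C:\repository\repos.db;ENG=ReposSrv;UID=dba;PWD=sql;CommLinks=ShMem;AutoStop=No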
Chris
"David Dichmann [Sybase]" <dic...@sybase.com> wrote in message
news:4b30d5bf@forums-1-dub...