We are running the above-mentioned version of SQL Anywhere with a
database size of 6 GB. We are now facing a slow performance problem,
and it is getting worse day by day.
Could somebody please tell me what hardware server configuration is
required to handle this problem? Our database administrator has fully
optimized the database parameters, but there has been no change in
performance.
Regards,
Adnan Jamil.
The whitepapers at http://iablog.sybase.com/paulley/whitepapers,
specifically the papers on "Capacity Planning with SQL Anywhere" and
"Diagnosing Application Performance Issues with SQL Anywhere", may be of
use. It may be that additional hardware resources are required for your
application, but I would verify that this is the case before replacing
your server machine with another.
Glenn
--
Glenn Paulley
Director, Engineering (Query Processing)
Sybase iAnywhere
Blog: http://iablog.sybase.com/paulley
EBF's and Patches: http://downloads.sybase.com
choose SQL Anywhere Studio >> change 'time frame' to all
To Submit Bug Reports: http://case-express.sybase.com
SQL Anywhere Studio Supported Platforms and Support Status
http://my.sybase.com/detail?id=1002288
Whitepapers, TechDocs, and bug fixes are all available through the
Sybase iAnywhere pages at
http://www.sybase.com/products/databasemanagement/sqlanywhere/technicalsupport
>Our database administrator fully optimized the
>database parameters but no change in performance.
What do you mean by "database parameters"? If it refers to command
line options, please show us the command line.
Anyway, there is usually (always?) something that can be done before
investing in new hardware... and new hardware may not help at all.
This chapter of the Version 9 Help has a LOT of valuable material,
including a list of tips:
ASA SQL User's Guide
Monitoring and Improving Performance
Chapter 10 of my book (which was written for version 9) may also help:
http://www.risingroad.com/SQL_Anywhere_Studio_9_Developers_Guide.html
10 TUNING . . . . . . . . . . . . . . . . . . . . . . . 399
10.1 Introduction . . . . . . . . . . . . . . . . . . . 399
10.2 Request-Level Logging . . . . . . . . . . . . . . . 400
10.3 Index Consultant . . . . . . . . . . . . . . . . . 408
10.4 Execution Profiler . . . . . . . . . . . . . . . . 413
10.5 Graphical Plan . . . . . . . . . . . . . . . . . . 416
10.6 File Fragmentation and Reorganization . . . . . . . 421
10.6.1 File Fragmentation . . . . . . . . . . . . . . 421
10.6.2 Table Fragmentation . . . . . . . . . . . . . 423
10.6.3 Table Reorganization . . . . . . . . . . . . . 428
10.6.4 Index Fragmentation . . . . . . . . . . . . . 429
10.6.5 Index Reorganization . . . . . . . . . . . . . 432
10.6.6 Database Reorganization with Unload/Reload . . 433
10.7 CREATE INDEX . . . . . . . . . . . . . . . . . . . 437
10.8 Database Performance Counters . . . . . . . . . . . 443
10.9 Tips and Techniques . . . . . . . . . . . . . . . . 446
10.10 Chapter Summary . . . . . . . . . . . . . . . . . 453
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
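For example, the request-level logging described in 10.2 can be switched
on and off from SQL while the server is running. Here is a minimal
sketch; I am assuming the Version 9 option names Request_level_log_file
and Request_level_logging, so check the Help for the exact spelling and
values on your build:

  -- send the request-level log to a text file on the server machine
  CALL sa_server_option ( 'Request_level_log_file', 'C:\\temp\\rlog.txt' );
  -- record SQL statements as they are executed
  CALL sa_server_option ( 'Request_level_logging', 'SQL' );
  -- ... run the slow workload for a while, then turn logging off ...
  CALL sa_server_option ( 'Request_level_logging', 'NONE' );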
Your build of Version 9 contains the wonderful LogExpensiveQueries
option described here: http://www.sybase.com/detail?id=1055891
You may also be interested in Foxhound, particularly these features:
Database-level Curiosities
Table-level Curiosities
Database Monitor, including peaks and connections
http://www.risingroad.com/foxhound/foxhound1_help/foxhound_introduction.html#highlights
Since all we know about your database is "6 GB", the only answer that
can be given about hardware requirements is "any modern desktop PC
should handle it with ease, starting at around $400 for the whole
system."
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=4361692&CatId=2630
Tell us more about your environment, your applications, your hardware
and your database, and maybe someone will have more *specific*
suggestions. We are here to help!
Breck
On 13 Apr 2009 00:10:35 -0700, "Adnan Jamil" <adnan...@nji.com.pk>
wrote:
--
Breck Carter http://sqlanywhere.blogspot.com/
RisingRoad SQL Anywhere and MobiLink Professional Services
breck....@risingroad.com
Have you determined when you encounter the slow performance problem?
Hopefully you did not start having performance issues as soon as you
turned the server on. You might want to see what is running and check
whether any blocking is occurring.
Download this document (Steps in Diagnosing Application Performance
Problems) from Sybase:
http://iablog.sybase.com/paulley/wp-content/uploads/2008/07/perf.pdf
I hope this SQL helps you:
SELECT Activity.number AS Number,
       CONNECTION_PROPERTY( 'Name', Number ) AS connection_name,
       Activity.userID,
       IF LENGTH( TRIM( CAST( Activity.lastReqTime AS CHAR(200) ) ) ) > 0 THEN
          DATEDIFF( ms, Activity.lastReqTime, NOW(*) ) / 1000.0
       ELSE
          0
       ENDIF AS duration_sec,
       Activity.lastReqTime,
       CONNECTION_PROPERTY( 'TempFilePages', Number ) AS TempFilePages,
       CONNECTION_PROPERTY( 'ApproximateCPUTime', Number ) AS ApproximateCPUTime,
       CONNECTION_PROPERTY( 'PrepStmt', Number ) AS PrepStmt,
       CONNECTION_PROPERTY( 'LockCount', Number ) AS LockCount,
       CONNECTION_PROPERTY( 'BlockedOn', Number ) AS BlockedOn,
       CONNECTION_PROPERTY( 'DiskRead', Number ) AS DiskRead,
       CONNECTION_PROPERTY( 'DiskWrite', Number ) AS DiskWrite,
       CONNECTION_PROPERTY( 'LogWrite', Number ) AS LogWrite,
       CONNECTION_PROPERTY( 'NodeAddress', Number ) AS IPAddr,
       CONNECTION_PROPERTY( 'ReqType', Number ) AS requestType,
       CONNECTION_PROPERTY( 'ReqStatus', Number ) AS ReqStatus,
       Activity.lastStatement
  FROM sa_conn_activity() Activity
 WHERE requestType NOT IN ( 'COMMIT', 'CURSOR_CLOSE', 'PREFETCH' )
   AND LENGTH( Activity.lastReqTime ) > 0
   AND ReqStatus <> 'Idle'
   -- AND duration_sec > 900
 ORDER BY Activity.lastReqTime;
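To check specifically for blocking, something like this should work.
This is only a sketch; I am assuming the BlockedOn column returned by
sa_conn_info(), and the lock-detail columns vary between versions, so
check the docs for your build:

SELECT Number,
       Name,
       Userid,
       BlockedOn   -- connection number this one is waiting on; 0 = not blocked
  FROM sa_conn_info()
 WHERE BlockedOn <> 0;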
Good Luck,
Rachan Terrell
"Adnan Jamil" <adnan...@nji.com.pk> wrote in message
news:49e2e56b@forums-1-dub...
Here are the command line parameters:
-n gl -x TCPIP -c 1G -gp 16384 -p 8192 -u d:\shma\shma.db -n shma -c 1G
The server configuration is:
HP ProLiant ML370
Windows 2003 Server
Dual Xeon processors
2 GB physical memory
150 GB free disk space
I hope this information is more descriptive and helpful in analyzing the
problem.
Regards
Adnan Jamil
"Breck Carter [TeamSybase]" <NOSPAM__br...@gmail.com> wrote in
message news:mt66u45ejqgjeht0i...@4ax.com...
-c 1G is using only half of the RAM, indicating that you may be
sharing the server with other applications; maybe they are the reason
your database seems slow.
If it is a dedicated machine try -c 90P. You can never have too much
RAM. Note that in Version 9 the -ca default is effectively 256M so the
cache won't automatically grow beyond -c 1G. That is a bad -ca
default, and it was changed in Version 10 (bad defaults are rare with
SQL Anywhere but that was one).
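To see what the cache is actually doing at run time, you can query the
server properties. A quick sketch, assuming the property names
CurrentCacheSize, MinCacheSize and MaxCacheSize (reported in kilobytes;
check the Help for your version):

SELECT PROPERTY ( 'CurrentCacheSize' ) AS current_cache_kb,
       PROPERTY ( 'MinCacheSize' )     AS min_cache_kb,
       PROPERTY ( 'MaxCacheSize' )     AS max_cache_kb;

If the current size sits at roughly 1 GB while the server is busy, the
-c and -ca settings are worth revisiting.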
Adding RAM and switching to AWE caching might help. Another
recommendation: upgrade to SQL Anywhere 10.0.1 or, better yet, 11.0.1.
Also, if you switch to a 64-bit operating system then the RAM cache is
unlimited and you don't have to muck about with AWE.
-u tends to confirm that there are other applications taking up a lot
of resources. The Help says this: "If the server is running on a
dedicated machine, you should not use the -u option, as the database
cache itself is generally more efficient. You may want to use the -u
option if the server is running on a machine with several other
applications (so that a large database cache may interfere with other
applications) and yet IO-intensive tasks are run intermittently on the
server (so that a large cache will improve performance)."
-gp 16384 indicates you are starting multiple databases with varying
page sizes, and that the page size of d:\shma\shma.db is smaller than
16K (so -gp is required to force a larger RAM page size so that later
databases can be started). First of all, that wastes a lot of RAM; if
shma.db has a page size of 4K then each disk page from that file that
is loaded into RAM will waste 12K of the RAM page it is stored in.
Second, the 16K page size is generally NOT recommended, nor is 32K; it
is usually best to stick to 4K and 8K pages, with 4K being the best
all-round choice until you have reasons to switch.
Does shma.db actually have a 16K page size? Is it the only database
running on the server? (if so, -gp is unnecessary). Why did you pick a
16K page size?
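You can check both values from SQL. A quick sketch, assuming the
standard PageSize properties (DB_PROPERTY for the database file,
PROPERTY for the server's cache page size):

SELECT DB_PROPERTY ( 'PageSize' ) AS database_page_size,
       PROPERTY ( 'PageSize' )    AS server_cache_page_size;

If those two numbers differ, RAM is being wasted as described above.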
=====
How did you choose those command line options? By reading the
description of each option in the Help and deciding which ones to use?
If so, that is NOT the best way to proceed. Command line options
should be used when you have clear evidence that the default values
are not the best choice.
Is your disk drive fragmented?
Is your database internally fragmented?
How many users are connected at one time?
How many connections are busy at the same time?
How fast does the transaction log file grow?
How many rows are there, in your biggest tables?
Do you have any long-running queries?
Is the database temporary file heavily used and very large?
... the list of questions goes on and on and on... :)
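A few of those questions can be answered straight from SQL. Here is a
rough sketch; I am assuming sa_table_fragmentation() and the
SYSTABLE.count row estimate are available on your 9.x build, so
double-check against your Help:

-- approximate row counts of your biggest tables
SELECT table_name,
       SYSTABLE.count AS approximate_rows
  FROM SYS.SYSTABLE
 WHERE table_type = 'BASE'
 ORDER BY approximate_rows DESC;

-- how many connections exist right now
SELECT PROPERTY ( 'ConnCount' ) AS current_connections;

-- table fragmentation: rows split across more than one page
SELECT * FROM sa_table_fragmentation();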
Breck
On 13 Apr 2009 22:54:38 -0700, "Adnan Jamil" <adnan...@nji.com.pk>