Sybase FAQ: 1/19 - index


David Owen

Apr 20, 2004, 9:44:59 AM
Archive-name: databases/sybase-faq/part1
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Sybase Frequently Asked Questions

Sybase FAQ Home Page | Adaptive Server Enterprise FAQ | Adaptive Server Anywhere FAQ
Replication Server FAQ | Search the FAQ
Sybase FAQ

Main Page

* Where can I get the latest release of this FAQ?
* What's new in this release?
* How can I help with the FAQ?
* Who do I tell about problems in the FAQ?
* Acknowledgements and Thanks
* Hall of Fame
* Copyright and Disclaimer
* General Index



Where can I get the latest release of this FAQ?

International Sybase User Group

This FAQ is hosted by kind permission of the International Sybase User Group
as a service to the Sybase community.

To get a text version of this FAQ:


If you want uncompressed versions of the various sections, they can be
obtained from ASE, ASA & REP.

To get the HTML for this FAQ:


Last major update: 21st February 2003.

Back to Top


What's new in this release?

Release 1.9

* Running multiple servers on a single server (UNIX and NT).

Back to Top


What's happening with the FAQ?

I have not had a lot of time to spend on the FAQ this year, mainly down to
work, or the lack of it. I know, we are all in the same boat, but it has meant
that I have had a lot less free time than I used to, and as a result the FAQ
has not been kept as up to date as I would like. Sadly, the work I have been
doing is with those other database vendors, but we won't name them here.
Anyway, that is the sob story over and done with. If anyone thinks that they
would like to see more effort applied, I would be happy to hand the mantle
over. Since the amount of help that I have actually seen amounts to
practically none, I am sure I will not be over-run with offers! I will
definitely have more time come January and plan some serious work on it then.

Back to Top


How can I help with the FAQ?

I have had offers from a couple of people to write sections, but if you feel
that you are in a position to add support for a section, or if you have some
FAQs to add, please let me know. This is a resource that we should all
support, so send me the stuff and I will include it.

Typos and specific corrections are always very useful. Less useful is the
general "I don't think that section x.y.z is very understandable". Sorry to
sound harsh, but what I need is actual text that is more readable. Better
still is actual HTML that makes it stand out and sing (if necessary)!

Currently I am looking for maintainers of the following sections: Replication,
Adaptive Server Anywhere, IQ Server, MPP Server and Open Server. I am not sure
whether to add a section for Omni Server. I sort of feel that, since Omni has
been subsumed into ASE as CIS, any FAQs should really be incorporated there.
However, if you know of some good Omni gotchas or tips, whether they are still
there in CIS or not, please send them in. I certainly plan to have a
subsection of ASE dealing with CIS even if Omni does not get its own major
section. I also think that we need sections on some of the really new stuff.
Jaguar and the new engines also deserve a spot.

Another very useful way that you can help is in getting people to update their
links. I have seen lots of links recently, some still pointing to Pablo's
original, some pointing to Tom's site but referring to it as coming from the
SGI site.

Back to Top


Who do I tell about problems in the FAQ?

The current maintainer is David Owen ( and you can send
errors in the FAQ directly to me. If you have an FAQ item (both the question
and the answer) send it to and I will include it.

Do not send email to any of the officials at ISUG, they are simply hosting the
FAQ and are not responsible for its contents.

Also, do not send email to Sybase, they are not responsible for the contents
either. See the Disclaimer.

Back to Top


Acknowledgements and Thanks

Special thanks must go to the following people for their help in getting this
FAQ to where it is today.

* Pablo Sanchez for getting the FAQ off the ground in the first place and for
many years of dedicated work in maintaining it.

* Anthony Mandic ( for a million things. Patiently answering
questions in all of the Sybase news groups, without which most beginners
would be lost. For supporting and encouraging me in getting this FAQ
together and for providing some pretty neat graphics.

* The ISUG, especially Luc Van der Veurst ( and Michael
Peppler (, for hosting this FAQ and providing support
in setting up the website.

* The members of the various news groups and mailing lists who, like Anthony,
provide unstinting support. The list is fairly long, but I think that Bret
Halford ( deserves a mention. If you go to Google News
and do a search, he submits almost as many replies as Anthony.

Back to Top


Hall of Fame

I am not sure how Pablo chose his select list, but there is certainly no
question as to their inclusion. I know that there are a couple of awards that the ISUG
give out each year for the people that the ISUG members believe have
contributed most to the Sybase community that year. I think that this section
should honour those people that deserve an award each and every year. If you
know of a candidate, let me know and I will consider his or her inclusion.
Self nominations are not acceptable :-)

The following people have made it to the Sybase FAQ Hall of Fame:

* Michael Peppler ( For Sybperl and all of the other
tools of which he is author or instigator plus the ceaseless support that
he provides through countless mailing lists, newsgroups and directly via

* Scott Gray ( Father of sqsh, much more than simply a
replacement for isql. How anyone developing or administering Sybase can
survive without it, I will never know.

* Pablo Sanchez ( Pablo got the first web based FAQ off the
ground, wrote most (all?) of the first edition and then maintained it for a
number of years. He did a fantastic job, building a resource that is
worth its weight in gold.

Back to Top


Copyright and Disclaimer


You are free to copy or distribute this FAQ in whole or in part, on any medium
you choose provided that you:

* include this Copyright and Disclaimer notice;
* do NOT distribute or copy, in any fashion, with the intention of making a
profit from its use;
* give FULL attribution to the original authors.


This FAQ is provided "as is" without any express or implied warranties. Whilst
every endeavour has been taken to ensure the accuracy of the information
contained within the articles, neither the author nor any of the contributors
assume responsibility for errors or omissions, or for damages resulting from
the use of the information contained herein.

If you are not happy about performing any of the suggestions contained within
this FAQ, you are probably better off calling Sybase Technical Support.


This site and all its contents belong to the Sybase FAQ (

Unless explicitly stated in an article, all material within this FAQ is
copyrighted. The primary copyright holders are David Owen and Pablo Sanchez.
However, all contributed material is, and will remain, the property of the
respective authors and contributors.

Back to Top


1.1: Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?

1.2: User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

1.3: Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

1.4: General Troubleshooting

1.4.1 How do I turn off marked suspect on my database?
1.4.2 On startup, the transaction log of a database has filled and recovery
has suspended, what can I do?
1.4.3 Why do my page locks not get escalated to a table lock after 200 locks?

1.5: Performance and Tuning

1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered indexes?
1.5.4 Optimistic versus pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

1.6: Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

2.1: Platform Specific Issues - Solaris

2.1.1 Should I run 32 or 64 bit ASE with Solaris?
2.1.2 What is Intimate Shared Memory or ISM?

2.2: Platform Specific Issues - NT/2000

2.2.1 How to Start ASE on Remote NT Servers
2.2.2 How to Configure More than 2G bytes of Memory for ASE on NT
2.2.3 Installation Issues

2.3: Platform Specific Issues - Linux

2.3.1 ASE on Linux FAQ

3: DBCC's

3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 Other sites with DBCC information.
3.4 Fixing a Munged Log

Performing any of the above may corrupt your ASE installation. Please do
not call Sybase Technical Support after screwing up ASE. Remember, always
take a dump of the master database and any other databases that are to be

4: isql

4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?
4.4 What alternatives to isql exist?
4.5 How can I make isql secure?

5: bcp

5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

6.1: SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

6.2: SQL Advanced

6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE
6.2.11 Is it possible to concatenate all the values from a column and
return a single row?
6.2.12 Selecting rows N to M without Oracle's rownum?
6.2.13 How can I return number of rows that are returned from a grouped
query without using a temporary table?

6.3: Useful SQL Tricks

6.3.1 How to feed the result set of one stored procedure into another.
6.3.2 Is it possible to do dynamic SQL before ASE 12?

7: Open Client

7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to ASE using ODBC
7.7 Which version of Open Client works with which ASE?
7.8 How do I tell the version of Open Client I am running?

9: Freeware

9.0 Where is all the code and why does Section 9 suddenly load in a
reasonable amount of time?

Stored Procedures

9.1.1 sp_freedevice - lists device, size, used and free.
9.1.2 sp_dos - This procedure graphically displays the scope of a
9.1.3 sp_whodo - augments sp_who by including additional columns: cpu,
9.1.4 sp__revroles - creates DDL to sp_role a mirror of your SQL
9.1.5 sp__rev_configure - creates DDL to sp_configure a mirror of your
SQL Server
9.1.6 sp_servermap - overview of your SQL Server
9.1.7 sp__create_crosstab - simplify crosstable queries
9.1.8 sp_ddl_create_table - creates DDL for all user tables in the
current database
9.1.9 sp_spaceused_table
9.1.10 SQL to determine the space used for an index.
9.1.11 sp_helpoptions - Shows what options are set for a database.
9.1.12 sp_days - returns days in current month.
9.1.13 sp__optdiag - optdiag from within isql
9.1.14 sp_desc - a simple list of a tables' columns
9.1.15 sp_lockconfig - Displays locking schemes for tables.

Shell Scripts

9.2.1 SQL and sh(1) to dynamically generate a dump/load database
9.2.2 update statistics script


9.3.1 SybPerl - Perl interface to Sybase.
9.3.2 - Sybperl script to reverse engineer a database.
9.3.3 - creates insert DDL for a table.
9.3.4 - converts

12: Miscellany

12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase


Adaptive Server Anywhere

0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information


Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication

Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?

Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log
on OLTP is filling.

Additional Information/Links

4.1 Links
4.2 Newsgroups

David Owen

Apr 20, 2004, 9:45:02 AM
Archive-name: databases/sybase-faq/part3

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Replication Server

1. Introduction to Replication Server
2. Replication Server Administration
3. Troubleshooting Replication Server
4. Additional Information/Links

Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication Server?

Thanks go to Manish I Shah for major help with this introduction.



1.1 Introduction


What is Replication Server?

Replication Server moves transactions (inserts, updates and deletes) at the
table level from a source dataserver to one or more destination dataservers.
The dataserver could be ASE or another major DBMS flavour (including DB2,
Informix and Oracle). The source and destinations need not be of the same type.

What can it do?

* Move data from one source to another.
* Move only a subset of data from source to destination. So, you can
subscribe to a subset of the data, or a subset of the columns, in the source
table, e.g. select * from clients where state = 'NY'
* Manipulation/transformation of data when moving from source to destination.
E.g. it can map data from a data-type in DB2 to an equivalent in Sybase.*
* Provide a warm-standby system. Can be incorporated with Open Switch to
provide a fairly seamless fail-over environment.
* Merge data from several source databases into one destination database
(could be for a warehouse type environment for example).
* Move data through a complicated network down to branch offices, say, only
sending the relevant data to each branch.

(* This is one of Sybase replication's real strengths, the ability to define
function string classes which allow the conversion of statements from one SQL
dialect to match the dialect of the destination machine. Ed)
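
Very roughly, and with every name and the SQL text here invented (check the
Replication Server reference manual for the exact syntax), a function string
replaces the SQL that the replication server emits for a given operation. The
?column!new? and ?column!old? placeholders are filled in from the replicated
row values:

alter function string clients_rep.rs_update
for my_function_class
output language
'update clients
    set name = ?name!new?
  where client_id = ?client_id!old?'
go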

How soon does the data move?

The data moves asynchronously. The time it takes to reach the destination
depends on the size of your transaction, the level of activity in that
particular database (a database as in Sybase systems), the length of the chain
(one or more replication servers that the transaction has to pass through to
reach the destination), the thickness of the pipe (network), how busy your
replication server is, etc. Usually, on a LAN, for small transactions, this is
about a second.

Back to top


1.2 Replication Server Components



Primary Dataserver

The source of data, where client applications enter, delete and modify data.
As mentioned before, this need not be ASE; it can be Microsoft SQL Server,
Oracle, DB2 or Informix. (I know that I should get a complete list.)

Replication Agent/Log Transfer Manager

The Log Transfer Manager (LTM) is a separate program/process which reads the
transaction log of the source server and transfers the transactions to the
replication server for further processing. With ASE 11.5, this has become part
of ASE and is now called the Replication Agent. However, you still need to use
an LTM for non-ASE sources. I imagine there is a version of the LTM for each
kind of source (DB2, Informix, Oracle etc). When replication is active, you
see one connection per replicated database in the source dataserver (sp_who).

Replication Server(s)

The replication server is an Open Server/Open Client application. The server
part receives transactions sent by either the source ASE or the source LTM.
The client part sends these transactions to the target server, which could be
another replication server or the final dataserver. As far as I know, the
server does not include the client component of any of the other DBMSes out of
the box.

Replicate (target) Dataserver

The server in which the final replication server (in the chain) will repeat
the transaction done on the primary. You will see one connection for each
target database in the target dataserver when the replication server is
actively transferring data (when idle, the replication server disconnects, or
"fades out" in replication terminology).

Back to top


1.3 What is the Difference Between Replication Server and SQL Remote?


Both SQL Remote and Replication Server perform replication. SQL Remote was
originally part of the Adaptive Server Anywhere tool kit and is intended for
intermittent replication. (The classic example is that of a salesman
connecting on a daily basis to upload sales and download new prices and
inventory.) Replication Server is intended for near real-time replication.

Back to top



Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?



2.1 How can I improve throughput?


Check the Obvious

First, ensure that you are only replicating those parts of the system that
need to be replicated. Some of this is obvious: don't replicate any table that
does not need to be replicated. Check that you are only replicating the
columns you need. Replication is very sophisticated and allows you to
replicate a subset of the columns as well as a subset of the rows.
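
Row restriction is expressed in the subscription's where clause. As a sketch,
with invented replication definition, connection and column names:

create subscription clients_ny_sub
for clients_rep
with replicate at REPLICATE_DS.sales
where state = 'NY'
go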

Replicate Minimum Columns

Once replication is set up and synchronised, it is only necessary to replicate
those parts of the primary system that actually change. You are already
replicating only those rows and columns that need to be replicated, but you
also only need to replicate the actual changes. Check that each replication
definition is defined using the clause:

create replication definition rep_def_name
with primary...
replicate minimal columns
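
For reference, a complete (but entirely hypothetical) replication definition
using the clause might look like this; the server, database, table and column
names are all invented. "replicate minimal columns" tells the replication
server to send only the columns that actually changed in an update:

create replication definition clients_rep
with primary at PRIMARY_DS.sales
with all tables named 'clients'
(client_id int, name varchar(40), state char(2))
primary key (client_id)
replicate minimal columns
go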

Second Replication Server

This might be appropriate in a simple environment on systems with spare CPU
cycles but limited network bandwidth. When Sybase replicates from a primary to
a replicate using only one replication server, the data is transferred across
the network uncompressed. However, the communication between two replication
servers is compressed. By installing a second replication server it is
possible to reduce dramatically the bandwidth needed to replicate your data.

Dedicated Network Card

Obviously, if replication is sharing the same network resources as all of the
clients, there is the possibility of a bottleneck if the network bandwidth is
close to saturation. If a second replication server is not going to cut it,
because you already have one or there are no spare cycles, then a second
network card may be the answer.

First, you will need to configure ASE to listen on two network connections.
This is relatively straightforward, and there is no change to the client
configuration: clients all continue to talk to ASE using the same connection.
When defining the replication server, ensure that the interfaces/sql.ini entry
that it uses only has the second connection in it. This may involve some
jiggery-pokery with environment variables, but should be possible, even on NT!
You need to be a little careful with the network configuration. Sybase will
communicate with the two servers on the correct addresses, but if the
underlying operating system believes that both clients and repserver can be
serviced by the same card, then it will use the first card that it comes to.
So, if all of the clients, ASE and the replication server were on the same
segment, and the host running ASE had two cards onto that same segment, then
it would choose to route all packets through the first card. OK, so this is a
very simplistic error to correct, but similar things can happen with more
convoluted and, superficially, better thought out configurations.

+---------+                                 +-----------+   +-----------+
|         |--> NE(1) --> All Clients...     |           |   |           |
| Primary |                                 | repserver |   | replicate |
|         |--> NE(2) ---------------------->|           |-->|           |
|         |                                 |           |   |           |
+---------+                                 +-----------+   +-----------+

So, configure NE(1) and NE(2) to be on, say, two different subnets and all
should be well. OK, so my character art is not perfect, but I think that you
get the gist!
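
The interfaces file entries for such a setup might look something like the
following sketch (hostnames, ports and network names are all invented; the
format shown is the Unix style). ASE's own entry listens on both cards, while
the copy of the file used by the replication server carries only the second
connection:

ASE's interfaces file:

PRIMARY_ASE
	master tcp ether hostA-ne1 4100
	query tcp ether hostA-ne1 4100
	master tcp ether hostA-ne2 4100
	query tcp ether hostA-ne2 4100

Replication server's interfaces file:

PRIMARY_ASE
	master tcp ether hostA-ne2 4100
	query tcp ether hostA-ne2 4100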

No Network Card

If RepServer resides on the same physical machine as either the primary or the
replicate, it is possible to use the localhost, or loopback, network device.
The loopback device is a network interface that connects back to itself
without going through the network interface card. It almost always uses the IP
address So, apply the technique described above but, instead of
using a dedicated network card, use the loopback device. Obviously, the two
servers have to be on the same physical machine or it won't work!

Back to top


2.2 Where should I install replication server?


A seemingly trivial question, but one that can cause novices a bit of worry.

There are three answers: on the primary machine, on the replicate machine or on
a completely separate machine. There is no right answer, and if you are doing
an initial install it probably pays to consider the future, consider the
proposed configuration and have a look at the load on the available machines.

It is probably fair to say that replication is not power hungry, but neither
is it free. If the primary is only just about coping with its current load,
then it might be as well to look at hosting the replication server on another
machine. The same argument applies to the replicate. If you think that network
bandwidth may be an issue, and you may have to add a second replication
server, you may be better off starting with repserver running on the primary.
It is marginally easier to add a repserver to an existing configuration if the
first repserver is on the primary.

Remember that a production replication server on Unix will require raw devices
for the stable devices and that these can be more than 2GB in size. If you are
restricted in the number of raw partitions you have available on a particular
machine, then this may have a bearing. See Q2.3.

Installing replication server on its own machine will, of course, introduce all
sorts of problems of its own, as well as answering some. The load on the
primary or the replicate is reduced considerably, but you are definitely going
to add some load to the network. Remember that ASE->Rep and Rep->ASE is
uncompressed. It is only Rep->Rep that is compressed.

Back to top


2.3 Using large raw partitions with Replication Server on Unix.


It is good practice with production installations of Replication Server on
Unix to use raw partitions for the stable devices, for just the same reason
that production ASEs use raw partitions. Raw devices can be a maximum of 2GB
with replication server up to release 11.5. (I have not checked

In order to utilise a raw partition that is greater than 2GB in size you can do
the following (remember all of the cautionary warnings about trying this sort
of stuff out in development first!):

add partition firstpartition on '/dev/rdsk/c0t0d0s0' with size 2024
add partition secondpartition on '/dev/rdsk/c0t0d0s0' with size 2024 starting at 2048

Notice that the initial partition is sized at 2024MB and not 2048. I have not
found this in the documentation, but replication certainly seems to have a
problem allocating a full 2GB. Interestingly, doing the same operation through
Replication Server Manager or Sybase Central caused no problems at all.

Back to top


2.4 How to replicate col = col + 1


First, while the rule that you should never update a primary key may be a
philosophical choice in a non-replicated system, it is an architectural
requirement of a replicated system.

If you use simple data replication, and your primary table is:


and you issue a:

update table set id=id+1

Rep server will do this in the replicate:

begin tran
update table set id=2 where id=1
update table set id=3 where id=2
update table set id=4 where id=3
commit tran

Hands up all who can see a bit of a problem with this! Remember, repserver
doesn't replicate statements, it replicates the results of statements.

One way to perform this update is to build a stored procedure on both sides
that executes the necessary update and replicate the stored procedure call.
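
A sketch of that approach (the procedure and table names here are invented;
the table is assumed to exist, with the same shape, on both sides):

create procedure upd_t_ids
as
    update t set id = id + 1
go

sp_setrepproc upd_t_ids, 'function'
go

sp_setrepproc marks the procedure as replicated, so that the single procedure
call, rather than the million row images, is forwarded. The same procedure is
created on the replicate so that the replicated call finds something to
execute.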

Back to top


2.5 What is the difference between an LTM and a RepAgent?


As described in Section 1.2, Log Transfer Managers (LTMs) and RepAgents are the
processes that transfer data between ASE and the Replication Server.

LTMs were delivered with the first releases of Replication Server. Each LTM is
a separate process at the operating system level that runs along side ASE and
Replication Server. As with ASE and Replication Server, a RUN_<ltm_server> and
configuration file is required for each LTM. One LTM is required for each
database being replicated.

Along with ASE 11.5 a new concept was introduced: the RepAgent. I am not sure
if you needed to use Replication Server 11.5 as well, or whether RepAgents
could talk to earlier versions of Replication Server. Each RepAgent is, in
effect, a slot-in replacement for an LTM. However, instead of running as a
separate operating system process, it runs as a thread within ASE. Pretty much
all of the requirements for replication using an LTM apply to RepAgents (one
per database being replicated, etc.), but now you do not need separate
configuration files.

Back to top


2.6 Which should I use, RepAgent or LTM?


The differences between RepAgents and LTMs are discussed in Section 2.5.
Which, then, to choose? There are pros and cons to both; however, I think that
it should be stated up front that RepAgents are the latest offering and I
believe that Sybase would expect you to use them. Certainly the documentation
for LTMs is a little buried, implying that Sybase does not consider them to be
as current as RepAgents.

LTM Cons:

* Older technology. Not sure if it is being actively supported.
* Not integrated within ASE, so there is a (small) performance penalty.
* Separate processes, so they need additional monitoring in production.

LTM Pros:

* Possible to restart an LTM without having to restart ASE.

RepAgent Cons:

* If it crashes it is possible that you will have to restart ASE in order to
restart the RepAgent.

RepAgent Pros:

* Latest, and presumably greatest, offering.
* Tightly integrated with ASE, so good performance.
* Less to manage, and no extra entries in the interfaces file.

Back to top



Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log on
OLTP is filling.



3.1 Why am I running out of locks on the replicate side?


Sybase replication works by taking each transaction that occurs in the primary
dataserver and applying it to the replicate. Since replication works on the
transaction log, a single, atomic update on the primary side that updates a
million rows will be translated into a million single-row updates. This may
seem very strange but is a simple consequence of how it works. On the primary,
this million-row update will attempt to escalate the locks that it has taken
out to an exclusive table lock. However, on the replicate side each row is
updated individually, much as if they were being updated within a cursor loop.
Now, Sybase only tries to escalate locks from a single atomic statement (see
ASE Qx.y), so it will never try to escalate the lock. However, since the
updates are taking place within a single transaction, Sybase will need to take
out enough page locks to lock the million rows.

So, how much should you increase the locks parameter on the replicate side? A
good rule of thumb might be to double it or add 40,000, whichever is the
larger. This has certainly worked for us.
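
In terms of actual commands on the replicate ASE, that amounts to something
like the following (the figure of 100000 is purely illustrative):

1> sp_configure 'number of locks'
2> go
1> sp_configure 'number of locks', 100000
2> go

The first call just reports the current value; the second sets the new one.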

Back to top


3.2 Someone was playing with replication and now the transaction log on OLTP
is filling.


Once replication has been configured, ASE adds another marker to the
transaction log. The first marker is the conventional one that marks which
transactions have had their data written to disk. The second is there to
ensure that the transactions have also been replicated. Clearly, if someone
installed replication and did not clean up properly after themselves, this
marker will still be there and consequently the transaction log will be filling
up. If you are certain that replication is not being used on your system, you
can disable the secondary truncation marker with the following commands:

1> use <database>
2> go
1> dbcc settrunc(ltm, ignore)
2> go

The above code is the normal mechanism for disabling the truncation point. I
have never had a problem with it. However, an alternative mechanism for
disabling the truncation point is given below. I do not know whether it works
in situations where the previous example fails, or whether it is intended for
databases that are damaged. If someone knows when and why to use it, please
let me know (

1> sp_role "grant", sybase_ts_role, sa
2> go
1> set role sybase_ts_role on
2> go
1> dbcc dbrepair(dbname, ltmignore)
2> go
1> sp_role "revoke", sybase_ts_role, sa
2> go

This scenario is also very common if you load a copy of your replicated
production database into development.

Back to top


next prev ASE FAQ

Additional Information/Links

4.1 Links
4.2 Newsgroups

next prev ASE FAQ


4.1 Links


Thierry Antinolfi has a replication FAQ at his site
dbadevil that covers a lot of good stuff.

Rob Verschoor has a 'Replication Server Tips & Tricks' section on his site, as
well as an indispensable quick reference guide!

Back to top


4.2 Newsgroups


There are a number of newsgroups that can deal with questions. Sybase have
several in their own forums area.

For Replication Server:


for SQL Remote and the issues of replicating with ASA:


and of course, there is always the ubiquitous


Back to top


next prev ASE FAQ

David Owen

Apr 20, 2004, 9:45:03 AM
Archive-name: databases/sybase-faq/part4

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.
Sybase Frequently Asked Questions


Adaptive Server Enterprise

0. What's in a name?
1. ASE Administration
1.1 Basic Administration
1.2 User Database Administration
1.3 Advanced Administration
1.4 General Troubleshooting
1.5 Performance and Tuning
1.6 Server Monitoring
2. Platform Specific Issues
2.1 Solaris
2.2 NT
2.3 Linux
3. DBCC's
4. isql
5. bcp
6. SQL Development
6.1 SQL Fundamentals
6.2 SQL Advanced
6.3 Useful SQL Tricks
7. Open Client
9. Freeware
10. Sybase Technical News
11. Additional Information
12. Miscellany


What's in a name?

Throughout this FAQ you will find references to SQL Server and, starting with
this release, ASE or Adaptive Server Enterprise to give it its full name. You
might also be a little further confused, since Microsoft also seem to have a
product called SQL Server.

Well, back at about release 4.2 of Sybase SQL Server, the two products were
exactly the same; Microsoft were to do the port to NT. As is pretty well
documented, there was a falling out. Both companies kept the same name for
their data servers and confusion began to reign. In an attempt to sort this
out, Sybase renamed their product Adaptive Server Enterprise (ASE) starting
with version 11.5.

I found this quote in a Sybase manual the other day:

Since changing the name of Sybase SQL Server to Adaptive Server Enterprise,
Sybase uses the names Adaptive Server and Adaptive Server Enterprise to refer
collectively to all supported versions of the Sybase SQL Server and Adaptive
Server Enterprise. Version-specific references to Adaptive Server or SQL Server
include version numbers.

I will endeavour to do the same within the FAQ, but the job is far from
complete.

Back to Top

Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?

User Database Administration # ASE FAQ


1.1.1: What is SQL Server and ASE?



Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System 10
has some significant improvements over the Sybase 4.x product line. Namely:

* the ability to allocate more memory to the dataserver without degrading its
performance.
* the ability to have more than one database engine to take advantage of
multi-processor machines.
* a minimally intrusive process to perform database and transaction dumps.

Background and More Terminology

An ASE (SQL Server) is simply a Unix process. It is also known as the database
engine. It has multiple threads to handle asynchronous I/O and other tasks. The
number of threads spawned is the number of engines (more on this in a second)
times five. This is the current implementation of Sybase System 10, 10.0.1 and
10.0.2 on IRIX 5.3.
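The engines-to-threads arithmetic above is straightforward; a quick sketch
(the factor of five is the figure quoted for System 10 on IRIX 5.3 and may
well differ on other platforms and releases):

```shell
# Threads spawned = engines * 5 (System 10 on IRIX 5.3, per the text above).
engines=4
threads=$((engines * 5))
echo "$engines engines -> $threads threads"
```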

Each ASE allocates the following resources from a host machine:

* memory and
* raw partition space.

Each ASE can have up to 255 databases. In most implementations the number of
databases is limited to what seems reasonable based on the load on the ASE.
That is, it would be impractical to house all of a large company's databases
under one ASE because the ASE (a Unix process) will become overloaded.

That's where the DBA's experience comes in: interrogating the user community
to determine how much activity is going to result on a given database or
databases, and from that deciding whether to create a new ASE or to house the
new database under an existing one. We do make mistakes (and businesses grow)
and have to move databases from one ASE to another. At times ASEs need to
move from one CPU server to another.

With Sybase System 10, each ASE can be configured to have more than one engine
(each engine is again a Unix process). There's one primary engine that is the
master engine and the rest of the engines are subordinates. They are assigned
tasks by the master.

Interprocess communication among all these engines is accomplished with shared
memory.

Sometimes when a DBA issues a Unix kill command to extinguish a maverick
ASE, the subordinate engines are forgotten. This leaves the shared memory
allocated, and eventually we may get into situations where swapping occurs
because this memory is locked. To find engines that belong to no master
ASE, simply look for engines owned by /etc/init (process id 1). These
engines can be killed -- this is just FYI and is a DBA duty.

Before presenting an example of an ASE, some other topics should be covered.

Connections

An ASE has connections to it. A connection can be viewed as a user login but
it's not necessarily so. That is, a client (a user) can spark up multiple
instances of their application and each client establishes its own connection
to the ASE. Some clients may require two or more per invocation. So typically
DBAs are only concerned with the number of connections, because the number of
users typically does not provide sufficient information for us to do our job.

Connections take up ASE resources, namely memory, leaving less memory for
the ASE's available cache.

ASE Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to an ASE. It was around 80MB, with 40MB being the typical max. This
was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really was no limit. For instance, we had an ASE
cranked up to 300MB under 10. With System 11 and 12 this has been further
extended: ASEs with 4GB of memory are not uncommon. I have not heard of an
11.9.3 or a 12 server with more than 4GB, but I am sure that they are not far
away.

The memory in an ASE is primarily used to cache data pages from disk. Consider
that the ASE is a lightweight operating system: handling users (connections),
allocating memory to users, keeping track of which data pages need to be
flushed to disk and the sort. Very sophisticated and complex. Obviously, if a
data page is found in memory it's much faster to retrieve than going out to
disk.
Each connection takes away a little bit from the available memory that is used
to cache disk pages. Upon startup, the ASE pre-allocates the memory that is
needed for each connection so it's not prudent to configure 500 connections
when only 300 are needed. We'd waste 200 connections and the memory associated
with that. On the other hand, it is also imprudent to under configure the
number of connections; users have a way of soaking up a resource (like an ASE)
and if users have all the connections a DBA cannot get into the server to
allocate more connections.
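As a rough illustration of why over-configuring connections wastes cache,
assume (hypothetically) that each configured connection pre-allocates on the
order of 60KB; the real per-connection cost varies by version and platform,
so treat the figure below purely as a placeholder.

```shell
# Hypothetical sizing: memory pre-allocated for configured connections.
kb_per_connection=60              # assumed figure; varies by platform/version
configured=500                    # what was configured
needed=300                        # what is actually used
wasted_kb=$(( (configured - needed) * kb_per_connection ))
echo "approx ${wasted_kb} KB of memory lost to unused connections"
```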

One of the neat things about an ASE is that it reaches (just like a Unix
process) a working set. That is, upon startup it'll do a lot of physical I/Os
to seed its cache, to get lookup information for typical transactions and the
like. So initially, the first users take heavy hits because their requests
have to be performed as physical I/O. Subsequent transactions have less
physical I/O and more logical I/O. A logical I/O is an I/O that is satisfied
in the ASE's buffer cache. Obviously, this is the preferred condition.
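The usual way to quantify this is a cache hit ratio. In the sketch below
'logical' counts all page reads and 'physical' counts those that had to go to
disk; the counts themselves are invented for illustration.

```shell
# Cache hit ratio: fraction of page reads satisfied in the buffer cache.
logical=120000                    # all page reads (cache + disk)
physical=6000                     # reads that had to go to disk
hit=$(( (logical - physical) * 100 / logical ))
echo "cache hit ratio: ${hit}%"
```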

DSS and OLTP

We throw around terms like DSS and OLTP as if everyone is supposed to know
this high-tech lingo. The problem is that they are two different animals, each
requiring an ASE to be tuned accordingly.

Well, here's the low down.

DSS - Decision Support System
OLTP - Online Transaction Processing

What do these mean? OLTP applications are those that have very short units of
work for each connection: fetch this row and, with the results, update one or
two other rows. Basically, a small number of rows affected per transaction in
rapid succession, with no significant wait times between operations in a
transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of data
to aggregate some values, so the transactions typically involve thousands of
rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same ASE because the nature of OLTP
is to grab things quickly but the nature of DSS is to stick around for a long
time reading tons of information and summarizing the results.

What a DSS application does is flush out the ASE's data page cache because of
the tremendous amount of I/O. This is obviously very bad for OLTP
applications because the small transactions are now hurt by this trauma. When
it was only OLTP a great percentage of I/O was logical (satisfied in the
cache); now transactions must perform physical I/O.

That's why it's good not to mix DSS and OLTP if at all possible.

If mixing them cannot be avoided, then you need to think carefully about how
you configure your server. Use named data caches to ensure that the very
different natures of OLTP and DSS do not conflict with each other. If you
have tables that are shared, consider using dirty reads for the DSS
applications if at all possible, since this will help not to block the OLTP
side.

Asynchronous I/O

Why async I/O? The idea is that in a typical online transaction processing
(OLTP) application, you have many connections (over 200 connections) and short
transactions: get this row, update that row. These transactions are typically
spread across different tables of the databases. The ASE can then perform each
one of these asynchronously without having to wait for others to finish. Hence
the importance of having async I/O fixed on our platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically don't
do this. It appears that the master engine goes to processor 0 and subsequent
subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines do not make
Sybase perform five times as fast; in our experience performance maxes out
with four engines and starts to degrade after that. This is supposed to be
fixed with Sybase System 11.

Putting Everything Together

As previously mentioned, an ASE is a collection of databases with connections
(that are the users) to apply and retrieve information to and from these
containers of information (databases).

The ASE is built and its master device is typically built over a medium sized
(50MB) raw partition. The tempdb is built over a cooked (regular - as opposed
to a raw device) file system to realize any performance gains by buffered
writes. The databases themselves are built over the raw logical devices to
ensure their integrity. (Note: in System 12 you can use the dsync flag to
ensure that writes to file system devices are secure.)

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix, VMS or NT. However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a physical
device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it is mapped by
the DBA to a logical device. Depending on the type of the device, it is
allocated, by the DBA, to the appropriate place (vague enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The DBA
needs to create a dump device.

We'll call that logically in the database datadump_for_my_db but we'll map it
to the physical world as /lv1/dumps/in_your_eye.dat So the DBA will write a
script that connects to the ASE and issues a command like this:

dump database my_stinking_db to datadump_for_my_db

and the backupserver (out of this scope) takes the contents of my_stinking_db
and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions. The reason for this is because Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it posted
to a file system buffer (as in a cooked file system) and the machine crashed,
as far as Sybase is concerned the write was committed. It was not, however, and
integrity of the database was lost. That is why Sybase needs raw partitions.
But back to the matter at hand...

When building a new ASE, the DBA determines how much space they'll need for all
the databases that will be housed in this ASE.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the changes
are kept. That is, every row that is updated/deleted/inserted gets placed into
the log portion then applied to the data portion of the database.

That's why DBAs strive to place the raw devices for logs on separate disks:
everything has to single-thread through the log.

A transaction is a collection of SQL statements (insert/delete/update) that are
grouped together to form a single unit of work. Typically they map very closely
to the business.

I'll quote the Sybase ASE Administration Guide on the role of the log:

The transaction log is a write-ahead log. When a user issues a statement
that would modify the database, ASE automatically writes the changes to the
log. After all changes for a statement have been recorded in the log, they
are written to an in-cache copy of the data page. The data page remains in
cache until the memory is needed for another database page. At that time,
it is written to disk. If any statement in a transaction fails to complete,
ASE reverses all changes made by the transaction. ASE writes an "end
transaction" record to the log at the end of each transaction, recording
the status (success or failure) of the transaction.

As such, the log will grow as user connections affect changes to the database.
The need arises to then clear out the log of all transactions that have been
flushed to disk. This is performed by issuing the following command:

dump transaction my_stinking_db to logdump_for_my_db

The ASE will write to the dump device all transactions that have been committed
to disk and will delete the entries from its copy, thus freeing up space in the
log. Dumping of the transaction logs is accomplished via cron (the Unix
scheduler; NT users would have to resort to at or some third party tool). We
schedule the heavily hit databases every 20 minutes during peak times.
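A cron schedule for such 20-minute dumps might look like the entry below. The
wrapper script name and path are hypothetical placeholders for a script that
feeds the dump transaction command to isql.

```shell
# crontab entry (hypothetical wrapper script): dump the transaction log
# every 20 minutes, 08:00-18:59, Monday to Friday.
# 0,20,40 8-18 * * 1-5 /usr/sybase/sys.config/dumptran.sh my_stinking_db
```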

A single user can fill up the log by issuing a begin transaction with no
corresponding commit/rollback transaction. This is because all their
changes are being applied to the log as an open-ended transaction, which is
never closed. This open-ended transaction cannot be flushed from the log,
and therefore grows until it occupies all of the free space on the log
segment.
And the way we dump it is with a dump device. :-)

An Example

If the DBA has four databases to plop on this ASE and they need a total of
800MB of data and 100MB of log (because that's what really matters to us), then
they'd probably do something like this:

1. allocate sufficient raw devices to cover the data portion of all the
databases;
2. allocate sufficient raw devices to cover the log portion of all the
databases;
3. start allocating the databases to the devices.

For example, assuming the following database requirements:

| DB | Data | Log |
| a  | 300  | 30  |
| b  | 400  | 40  |
| c  | 100  | 10  |

and the following devices:

| Logical       | Physical           | Size |
| dks3d1s2_data | /dev/rdsk/dks3d1s2 | 500  |
| dks4d1s2_data | /dev/rdsk/dks4d1s2 | 500  |
| dks5d1s0_log  | /dev/rdsk/dks5d1s0 | 200  |

then the DBA may elect to create the databases as follows:

create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on dks5d1s0_log = 10
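The allocations above can be sanity-checked with a little arithmetic; this
sketch simply verifies that no device is over-committed (all sizes in MB,
taken from the tables above).

```shell
# Check that the space allocated on each device does not exceed its size.
check() {   # usage: check <used_mb> <size_mb> <device>
    if [ "$1" -le "$2" ]; then
        echo "$3: ok ($1/$2 MB)"
    else
        echo "$3: OVERALLOCATED ($1/$2 MB)"
    fi
}
check $((300 + 50))      500 dks3d1s2_data   # databases a and c
check $((400 + 50))      500 dks4d1s2_data   # databases b and c
check $((30 + 40 + 10))  200 dks5d1s0_log    # logs for a, b and c
```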

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine because it can be used
for future growth. While the Sybase ASE is running, no other Sybase ASE can
re-allocate these physical devices.

TempDB

TempDB is simply a scratch pad database. It gets recreated when a SQL Server is
rebooted. The information held in this database is temporary data. A query may
build a temporary table to assist it; the Sybase optimizer may decide to create
a temporary table to assist itself.

Since this is an area of constant activity we create this database over a
cooked file system which has historically proven to have better performance
than raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new ASE, we allocate a port to it (currently, DBA reserves
ports 1500 through 1899 for its use). We then map a host name to the different
ports: hera, fddi-hera and so forth. We can actually have more than one port
number for an ASE but we typically don't do this.

Back to top


1.1.2: How to start/stop ASE when CPU reboots


Below is an example of the various files (on Irix) that are needed to start/
stop an ASE. The information can easily be extended to any UNIX platform.

The idea is to allow as much flexibility to the two classes of administrators
who manage the machine:

* The System Administrator
* The Database Administrator

Any errors introduced by the DBA will not interfere with the System
Administrator's job.

With that in mind we have the system startup/shutdown file /etc/init.d/sybase
invoking a script defined by the DBA: /usr/sybase/sys.config/


On some operating systems this file must be linked to a corresponding entry in
/etc/rc.0 and /etc/rc.2 -- see rc0(1M) and rc2(1M)

# last modified: 10/17/95, sr.
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files

IS_ON=/sbin/chkconfig       # IRIX convention; adjust for your platform
CONFIG=/usr/sybase/sys.config

if $IS_ON verbose ; then    # For a verbose startup and shutdown
        ECHO=echo
        VERBOSE=-v
else                        # For a quiet startup and shutdown
        ECHO=:
        VERBOSE=
fi

case "$1" in
'start')
        if $IS_ON sybase; then
                if [ -x $CONFIG/start.sybase ]; then
                        $ECHO "starting Sybase servers"
                        /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
                fi      # <error condition>
        fi
        ;;
'stop')
        if $IS_ON sybase; then
                if [ -x $CONFIG/stop.sybase ]; then
                        $ECHO "stopping Sybase servers"
                        /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
                fi      # <error condition>
        fi
        ;;
*)
        echo "usage: $0 {start|stop}"
        ;;
esac



#!/bin/sh -a
# Script to start sybase
# NOTE: different versions of sybase exist under /usr/sybase/{version}
# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
        OUTPUT=">/dev/null 2>&1"
else
        OUTPUT=""
fi
# 10.0.2 servers
cd $HOME
# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT
# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT
exit 0


#!/bin/sh
# Script to stop sybase
# Determine if we need to spew our output
if [ -z "$1" ] ; then
        OUTPUT=">/dev/null 2>&1"
else
        OUTPUT=""
fi
eval killall -15 $OUTPUT dataserver backupserver sybmultbuf
sleep 2
# if they didn't die, kill 'em now...
eval killall -9 $OUTPUT dataserver backupserver sybmultbuf
exit 0

If your platform doesn't support killall, it can easily be simulated as
follows:

# Simple killall simulation...
# $1 = signal
# $2 = process_name
# no error checking but assume first parameter is signal...
# what ya want for free? :-)
# note: with "ps -ef" output the process id is in column 2
kill -$1 `ps -ef | fgrep $2 | fgrep -v fgrep | awk '{ print $2 }'`

Back to top


1.1.3: How do I move tempdb off of the Master Device?


There used to be a section in the FAQ describing how to drop all of tempdb's
devices physically from the master device. This can make recovery of the
server impossible in the case of a serious error, so it is strongly
recommended that you do not do this but simply drop the segments as outlined
below.

Sybase TS Preferred Method of Moving tempdb off the Master Device.

This is the Sybase TS method of removing most activity from the master device:

1. Alter tempdb on another device:
1> alter database tempdb on ...
2> go
2. Use the tempdb:
1> use tempdb
2> go
3. Drop the segments:
1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment "logsegment", tempdb, master
2> go
1> sp_dropsegment "system", tempdb, master
2> go

Note that there is still some activity on the master device. On a three
connection test that I ran:

while ( 1 = 1 )
begin
    create table #x (col_a int)
    drop table #x
end

there was one write per second. Not bad.

An Alternative

(I recently did some benchmarks comparing this method, the previous method
and a combination of both. According to sp_sysmon there was no difference
in activity at all. I leave it here just in case it proves useful to
someone.)
The idea of this handy script is to simply fill the first 2MB of tempdb thus
effectively blocking anyone else from using it. The slight gotcha with this
script, since we're using model, is that all subsequent database creates will
also have tempdb_filler installed. This is easily remedied by dropping the
table after creating a new database.

This script works because tempdb is rebuilt every time the ASE is rebooted.
Very nice trick!

/* this isql script creates a table in the model database. */
/* Since tempdb is created from the model database when the */
/* server is started, this effectively moves the active */
/* portion of tempdb off of the master device. */

use model

/* note: 2k row size */
create table tempdb_filler(
a char(255) not null,
b char(255) not null,
c char(255) not null,
d char(255) not null,
e char(255) not null
)
/* insert 1024 rows */
declare @i int
select @i = 1
while (@i <= 1024)
begin
    insert into tempdb_filler values('a','b','c','d','e')
    if (@i % 100 = 0) /* dump the transaction every 100 rows */
        dump tran model with truncate_only
    select @i=@i+1
end

Back to top


1.1.4: How do I correct timeslice -201


(Note, this procedure is only really necessary with pre-11.x systems. In
system 11 systems, these parameters are tunable using sp_configure.)

Why Increase It?

Basically, it will allow a task to be scheduled onto the CPU for a longer time.
Each task on the system is scheduled onto the CPU for a fixed period of time,
called the timeslice, during which it does some work, which is resumed when its
next turn comes around.

The process has up until the value of ctimemax (a config block variable) to
finish its task. As the task is working away, the scheduler counts down
ctimemax units. When it gets to the value of ctimemax - 1, if it gets stuck and
for some reason cannot be taken off the CPU, then a timeslice error gets
generated and the process gets infected.

On the other hand, ASE will allow a server process to run as long as it needs
to. It will not swap the process out for another process to run; the process
decides when it is "done" with the server CPU. If, however, a process goes on
and on and never relinquishes the server CPU, then the server will timeslice
the process.
Potential Fix

1. Shutdown the ASE
2. %buildmaster -dyour_device -yctimemax=2000
3. Restart your ASE. If the problem persists contact Sybase Technical Support
notifying them what you have done already.

Back to top


1.1.5: Certified Sybase Professional


There have been changes in the process of becoming a Sybase Certified
Professional. There's a very informative link at
education/profcert, Professional Certification.

Rob Verschoor has put together some good stuff on his pages ( http:// that have pretty much all that you need
to know. He also has a quiz which is intended to test each and everyone's
knowledge of ASE and RepServer.

Sybase have released some sample questions (look for them at http:// The GUI requires MS Windows (at the time of
writing), but they are definitely a sample of what you will be asked. There are
also a couple of CDs available with yet more questions on them.

The Certification Kickback

There have been a couple of articles recently covering the kickback that seems
to be happening as far as certification is concerned. Several HR people have
said that if a person's CV (resume) is sent in covered in certifications then
it goes straight into the bit bucket. I do not know if this is true or not, but
one thing that you might wish to consider is the preparation of two CVs, one
with certifications, one without. If the job request specifies that
certification is necessary, then send in the appropriate CV. If it does not
specify certification, send in the clean version. If you go into the interview
for a job that did not specify certifications up front and the interviewer
starts going on about you not being certified, you simply produce your card as
proof.

Back to top

1.1.6: RAID and Sybase


Here's a short summary of what you need to know about Sybase and RAID.

The newsgroup has a detailed FAQ on RAID, but here are a few basics.

RAID means several things at once. It provides increased performance through
disk striping, and/or resistance to hardware failure through either mirroring
(fast) or parity (slower but cheaper).

RAID 0

RAID 0 is just striping. It allows you to read and write quickly, but provides
no protection against failure.

RAID 1

RAID 1 is just mirroring. It protects you against failure, and generally reads
and writes as fast as a normal disk. It uses twice as many disks as normal (and
sends twice as much data across your SCSI bus, but most machines have plenty of
extra capacity on their SCSI busses.)

Sybase mirroring always reads from the primary copy, so it does not
increase read performance.

RAID 0+1

RAID 0+1 (also called RAID 10) is striping and mirroring together. This gives
you the highest read and write performance of any of the raid options, but uses
twice as many disks as normal.

RAID 4 and 5

RAID 4 and 5 have disk striping and use 1 extra disk to provide parity. Various
vendors have various optimizations, but this RAID level is generally much
slower at writes than any other kind of RAID.


I am not sure if this is a genuine RAID standard, further checking on your part
is required.

Write Caching

Most hardware RAID controllers also provide a battery-backed RAM cache for
writing. This is very useful, because it allows the disk to claim that the
write succeeded before it has done anything. If there is a power failure, the
information will (hopefully) be written to disk when the power is restored. The
cache is very important because database log writes cause the process doing the
writes to stop until the write is successful. Systems with write caching thus
complete transactions much more quickly than systems without.

What RAID levels should my data, log, etc be on? Well, the log disk is
frequently written, so it should not be on RAID 4 or 5. If your data is
infrequently written, you could use RAID 4 or 5 for it, because you don't mind
that writes are slow. If your data is frequently written, you should use RAID
0+1 for it. Striping your data is a very effective way of avoiding any one disk
becoming a hot-spot. Traditionally Sybase databases were divided among devices
by a human attempting to determine where the hot-spots are. Striping does this
in a straightforward fashion, and also continues to work if your data access
patterns change.

Your tempdb is data but it is frequently written, so it should not be on RAID 4
or 5.

If your RAID controller does not allow you to create several different kinds of
RAID volumes on it, then your only hope is to create a huge RAID 0+1 set. If
your RAID controller does not support RAID 0+1, you shouldn't be using it for
database work.

Back to top


1.1.7: How to swap a db device with another


Here are four approaches. Before attempting any of the following: Backup,
Backup, Backup.

Dump and Restore

1. Backup the databases on the device, drop the databases, drop the devices.
2. Rebuild the new devices.
3. Rebuild the databases (Make sure you recreate the fragments correctly - See
Ed Barlow's scripts ( for an sp that
helps you do this if you've lost your notes. Failure to do this will
possibly lead to data on log segments and log on data segments).
4. Reload the database dumps!

Twiddle the Data Dictionary - for brave experts only.

1. Shut down the server.
2. Do a physical dump (using dd(1), or some such utility) of the device to be
moved.
3. Load the dump to the new device.
4. Edit the data dictionary (sysdevices.physname) to point to the new device.

The Mirror Trick

1. Create a mirror of the old device, on the new device.
2. Unmirror the primary device, thereby making the _backup_ the primary
device.
3. Repeat this for all devices until the old disk is free.

dd (Unix only)

(This option is of no use if you need to move a device now; rather, it is for
when you anticipate moving a device at some point in the future.)

You may want to use this approach for creating any database.

Create (or use) a directory of symbolic links to the devices you wish to use.
Then create your database, but instead of pointing at /dev/device, point at
/directory/symlink. When it comes time to move your devices, you shut down the
server, simply dd(1) the data from the old device to the new device, recreate
the symbolic link to point at the new device and restart the ASE. Simple as
that.

Backups are a requisite in all cases, just in case.

Back to top


1.1.8: Server naming and renaming


There are three totally separate places where ASE names reside, causing much
confusion.

ASE Host Machine interfaces File

A master entry in here for server TEST will provide the network information
that the server is expected to listen on. The -S parameter to the dataserver
executable tells the server which entry to look for, so in the RUN_TEST file,
-STEST will tell the dataserver to look for the entry under TEST in the
interfaces file and listen on any network parameters specified by 'master'.

master tcp ether hpsrv1 1200
query tcp ether hpsrv1 1200

Note that preceding the master/query entries there's a tab.

This is as far as the name TEST is used. Without further configuration the
server does not know its name is TEST, nor do any client applications.
Typically there will also be query entries under TEST in the local interfaces
file, and client programs running on the same machine as the server will pick
this connection information up. However, there is nothing to stop the query
entry being duplicated under another name entirely in the same interfaces file.

query tcp ether hpsrv1 1200

isql -STEST or isql -SARTHUR will connect to the same server. The name is
simply a search parameter into the interfaces file.
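The lookup behaviour described above is easy to mimic; the sketch below is an
illustration of how a name is resolved against an interfaces file, not
Sybase's actual parser (the lookup helper and the sample entries are invented
for the example):

```python
def lookup(interfaces_text, server_name, kind="query"):
    """Return (protocol, host, port) tuples listed under server_name.

    Entry names start in column one; master/query lines are indented
    (by a tab in a real interfaces file)."""
    current, found = None, []
    for line in interfaces_text.splitlines():
        if not line.strip():
            continue
        if not line[0].isspace():      # a new entry name
            current = line.split()[0]
        elif current == server_name:
            fields = line.split()      # e.g. query tcp ether hpsrv1 1200
            if fields[0] == kind:
                found.append((fields[1], fields[3], int(fields[4])))
    return found

SAMPLE = """\
TEST
\tmaster tcp ether hpsrv1 1200
\tquery tcp ether hpsrv1 1200
ARTHUR
\tquery tcp ether hpsrv1 1200
"""

# isql -STEST and isql -SARTHUR resolve to the same network address.
print(lookup(SAMPLE, "TEST"))
print(lookup(SAMPLE, "ARTHUR"))
```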

Client Machine interfaces File

Again, as the server name specified to the client is simply a search parameter
for Open Client into the interfaces file, SQL.INI or WIN.INI the name is
largely irrelevant. It is often set to something that means something to the
users, especially where they might have a choice of servers to connect to. Also
multiple query entries can be set to point to the same server, possibly using
different network protocols, e.g. if TEST has the following master entries on
the host machine:

master tli spx /dev/nspx/ \xC12082580000000000012110
master tcp ether hpsrv1 1200

Then the client can have a meaningful name:

query tcp ether hpsrv1 1200

or alternative protocols:

query tcp ether hpsrv1 1200
query tli spx /dev/nspx/ \xC12082580000000000012110


ASE sysservers Table

This system table holds information about remote ASEs that you might want to
connect to, and also provides a method of naming the local server.

Entries are added using the sp_addserver system procedure - add a remote server
with this format:

sp_addserver server_name, null, network_name

server_name is any name you wish to refer to a remote server by, but
network_name must be the name of the remote server as referenced in the
interfaces file local to your local server. It normally makes sense to make the
server_name the same as the network_name, but you can easily do:

sp_addserver LIVE, null, ACCTS_LIVE

When you execute for example, exec LIVE.master..sp_helpdb the local ASE will
translate LIVE to ACCTS_LIVE and try to talk to ACCTS_LIVE via the ACCTS_LIVE
entry in the local interfaces file.

Finally, a variation on the sp_addserver command:

sp_addserver LOCALSRVNAME, local

names the local server (after a restart). This is the name the server reports
in the errorlog at startup, the value returned by @@SERVERNAME, and the value
placed in Open Client server messages. It can be completely different from the
names in RUN_SRVNAME or in local or remote interfaces - it has no bearing on
connectivity matters.

Back to top


1.1.9: How do I interpret the tli strings in the interface file?


The tli string found in Solaris interfaces files is a hex string containing
the port number and IP address. If you have an entry

master tli tcp /dev/tcp \x000204018196c4510000000000000000

Then it can be interpreted as follows:

x0002 no user interpretation (header info?)
0401 port number (1025 decimal)
81 first part of IP address (129 decimal)
96 second part of IP address (150 decimal)
c4 third part of IP address (196 decimal)
51 fourth part of IP address (81 decimal)

So, the above tli address is equivalent to

master tcp ether sybhost 1025

where sybhost's IP address is

The following piece of Sybperl (courtesy of Michael Peppler) takes a tli entry
and returns the IP address and port number for each server in a Solaris
interfaces file.

#!/usr/local/bin/perl -w

use strict;

my $server;
my @dat;
my ($port, $ip);

while(<>) {
    next if /^\s*$/;
    next if /^\s*\#/;
    if(/^\w/) {                 # a server name, flush left
        $server = $_;
        $server =~ s/\s*$//;
    } else {                    # an indented master/query line
        @dat = split(' ', $_);
        ($port, $ip) = parseAddress($dat[4]);
        print "$server - $dat[0] on port $port, host $ip\n";
    }
}

sub parseAddress {
    my $addr = shift;

    my (@arr) = (hex(substr($addr, 10, 2)),
                 hex(substr($addr, 12, 2)),
                 hex(substr($addr, 14, 2)),
                 hex(substr($addr, 16, 2)));
    my $port = hex(substr($addr, 6, 4));
    my $ip = join('.', @arr);

    ($port, $ip);
}
Back to top


1.1.10: How can I tell the datetime my Server started?


Method #1

The normal way would be to look at the errorlog, but this is not always
convenient or even possible. From a SQL session you find out the server startup
time to within a few seconds using:

select "Server Start Time" = crdate
from master..sysdatabases
where name = "tempdb"

Method #2

Another useful query is:

select * from sysengines

which shows, amongst other things, the time at which each engine came online,
as well as the address and port number at which the server is listening.

Back to top


1.1.11: Raw partitions or regular files?


Hmmm... as always, this answer depends on the vendor's implementation of
cooked file system I/O for the ASE...

Performance Hit (synchronous vs asynchronous)

If, on a given platform, the ASE performs file system I/O synchronously, then
the ASE is blocked on the read/write and throughput is decreased tremendously.

The way the ASE typically works is that it will issue an I/O (read/write),
save the I/O control block and continue to do other work (on behalf of other
connections). It will periodically poll the work queues (network, I/O) and
resume connections when their work has completed (I/O completed, network data
arrived).

Performance Hit (bcopy issue)

Assuming that the file system I/O is asynchronous (this can be done on SGI), a
performance hit may be realized when bcopy'ing the data from kernel space to
user space.

Cooked I/O typically (again, SGI has something called directed I/O which allows
I/O to go directly to user space) has to go from disk, to kernel buffers and
from kernel buffers to user space; on a read. The extra layer with the kernel
buffers is inherently slow. The data is moved from kernel buffers to/from user
space using bcopy(). On small operations this typically isn't that much of an
issue but in a RDBMS scenario the bcopy() layer is a significant performance
hit because it's done so often...

Performance Gain!

It's true, using file systems, at times you can get performance gains assuming
that the ASE on your platform does the I/O asynchronously (although there's a
caveat on this too... I'll cover that later on).

If your machine has sufficient memory and extra CPU capacity, you can realize
some gains by having writes return immediately because they're posted to
memory. Reads will gain from the anticipatory fetch algorithm employed by most
operating systems.

You'll need extra memory to house the kernel buffered data and you'll need
extra CPU capacity to allow bdflush() to write the dirty data out to disk...
eventually... but as with everything there's a cost: extra memory and free CPU
capacity.

One argument is that instead of giving the O/S the extra memory (by leaving it
free), you should give it to the ASE and let it do its own caching... but
that's a different discussion.

Data Integrity and Cooked File System

If the Sybase ASE is not certified for use over a cooked file system then,
because of the nature of the kernel buffering (see the section above), you may
face database corruption by using a cooked file system anyway. The ASE thinks
that it has posted its changes out to disk but in reality it has gone only to
memory. If the machine halts without bdflush() having a chance to flush memory
out to disk, your database may become corrupted.

Some O/S's allow cooked files to have a write through mode and it really
depends if the ASE has been certified on cooked file systems. If it has, it
means that when the ASE opens a device which is on a file system, it fcntl()'s
the device to write-through.
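On POSIX systems that write-through mode corresponds to opening the device
with O_SYNC (or setting it afterwards via fcntl()); a minimal sketch, using an
ordinary file to stand in for a device:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "device.dat")

# O_SYNC asks the kernel to push each write through the buffer cache to
# stable storage before returning -- the property a database device needs.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"committed page")
finally:
    os.close(fd)

with open(path, "rb") as f:
    print(f.read())
```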

When to use cooked file system?

I typically build my tempdb on cooked file system and I don't worry about data
integrity because tempdb is rebuilt every time your ASE/SQL Server is rebooted.

Back to top


1.1.12: Is Sybase Y2K (Y2000) compliant?


Sybase is year 2000 compliant at specific revisions of each product. Full
details are available at, specifically (as these links
will undoubtedly change):

Note: Since we have made it to 2000 more or less intact, I see no reason to
include this question. I plan to remove with the next release of the FAQ. If
you feel strongly about leaving it in then let me know.

Back to top


1.1.13 How Can I Run the ASE Upgrade Manually?


This document describes the steps required to perform a manual upgrade of ASE
from release 4.x or 10.0x to release 11.0x. In most cases, however, you should
use sybinit to perform the upgrade.


1. Use release 11.0x sybinit to run the pre-eligibility test and Check
Reserved words. Make any necessary changes that are mentioned in the
sybinit log. The sybinit log is located in $SYBASE/init/logs/logxxxx.yyy.
2. Use isql to connect to the 4.x or 10.0x ASE and do the following tasks:
a. Turn on option to allow updates to system tables:
1> sp_configure "allow updates", 1
2> go

b. Checkpoint all databases:
1> use "dbname"
2> go
1> checkpoint
2> go

c. Shutdown the 4.x or 10.0x ASE.
1> shutdown
2> go
3. Copy the interfaces file to the release 11.0x directory.
4. Set the environment variable SYBASE to the release 11.0x directory.
5. Copy the runserver file to the release 11.0x $SYBASE/install directory.
6. Edit the $SYBASE/install/RUN_SYBASE (runserver file) to change the path
from the 4.x or 10.x dataserver directory to the new release 11.0x
directory.
7. Start ASE using the new runserver file.
% startserver -f$SYBASE/install/RUN_SYBASE
8. Run the upgrade program:

UNIX: $SYBASE/upgrade/upgrade -S"servername" -P"sapassword" > $SYBASE/init/logs/mylog.log 2>&1
VMS: SYBASE_SYSTEM:[SYBASE.UPGRADE]upgrade /password="sa_password" /servername="servername"

9. Shut down SQL server after a successful upgrade.
% isql -Usa -Pxxx
1> shutdown
2> go
10. Start ASE using the release 11.0x runserver file.

% startserver -f$SYBASE/install/RUN_SYBASE

11. Create the sybsystemprocs device and database if upgrading from 4.9.x. You
should create a 21 MB sybsystemprocs device and database.
a. Use the disk init command to create the sybsystemprocs device and
database manually, for example:

disk init name = "sybprocsdev",
    physname = "/dev/sybase/rel1102/sybsystemprocs.dat",
    vdevno = 4, size = 10752
go

To check which vdevno is available, type:

1> select distinct low/16777216 from sysdevices
2> order by low
3> go

A sample create database command:

create database sybsystemprocs on sybprocsdev = 21
go

Please refer to the "Sybase ASE Reference Manual" for more information on
these commands.

12. Run the installmaster and installmodel scripts:
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmaster
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmodel
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmaster"
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmodel"
13. If you upgraded from ASE 4.9.2, you will need to run sp_remap to remap the
compiled objects. sp_remap remaps stored procedures, triggers, rules,
defaults, or views to be compatible with the current release of ASE. Please
refer to the Reference Manual Volume II for more information on the
sp_remap command.

The syntax for sp_remap:

sp_remap object_name

If you are upgrading to ASE 11.0.x and the upgrade process failed when using
sybinit, you can invoke sybinit and choose remap query trees from the upgrade
menu screen. This is a new option that appears after a failed upgrade.

Back to top


1.1.14 We have lost the sa password, what can we do?


The first thing to remember is Douglas Adams' famous advice: "Don't panic!"

I know that most people use the 'sa' account all of the time, which is fine if
there is only ever one dba administering the system. If you have more than one
person accessing the server using the 'sa' account, consider using sa_role
enabled accounts and disabling the 'sa' account. Funnily enough, this is
obviously what Sybase think because it is one of the questions in the
certification exams.

If you see that someone is logged using the 'sa' account or is using an account
with 'sa_role' enabled, then you can do the following:

sp_configure "allow updates to system tables",1
update syslogins set password=null where name = 'sa'
sp_password null,newPassword

You must remember to reset the password before exiting isql or sqsh. I thought
that setting it to null would be enough, and exited isql thinking that I would
be able to get in with a null password. Take it from me that the risk is not
worth it. It failed for me and I had to kill the dataserver and get a new
password. I just tried the above method and it works fine.

If you have a user with sso_role enabled, login with that account and change
the 'sa' password that way. It is often a good idea to have a separate site
security officer, just to get you out of this sticky situation. It certainly
stops you looking like an idiot in management's eyes for having to reboot
production because you have locked yourself out!

OK, so we have got to the point where there are no accounts with sufficient
privileges to allow you to change the 'sa' account password. (You are sure
about that, since the next part can cause data loss, so have another quick
look.) We now need to do something more drastic.

If the server is actually running, then you need to stop it. We know that the
only accounts that can stop the server in a nice manner are not available, so
it has to be some sort of kill. You can try:



kill -TERM <pid>

kill -15 <pid>

(they are identical) which is designed to be caught by ASE, which then performs
the equivalent of shutdown with nowait. If ASE does not die, and you should
give it a little while to catch and act on the signal, then you might have to
try other measures, which is probably kill -9. Note that if you have tables
with identity columns, most of these will jump alarmingly, unless you are using
ASE 12.5 and the identity interval is set to 1.

Once down, edit the RUN_SERVER file (RUN_SERVER.bat on NT) and add "-psa" (it
is important not to leave a space between the "-p" and the "sa", and that it is
all lower-case) to the end of the dataserver or sqlsrvr.exe line. You will end
up with a file that looks a bit like:

# Adaptive Server name: N_UTSIRE
# Master device path: /data/sybase/databases/N_UTSIRE/master.dat
# Error log path: /opt/sybase-11.9.2/install/N_UTSIRE.log
# Directory for shared memory files: /opt/sybase-11.9.2
# Regenerate sa password -psa
/opt/sybase-11.9.2/bin/dataserver \
-d/data/sybase/databases/N_UTSIRE/master.dat \
-e/opt/sybase-11.9.2/install/N_UTSIRE.log \
-M/opt/sybase-11.9.2 -psa

(I add the line mentioning the regenerate, so that if I need to do this in a
moment of extreme pressure it is there in front of my nose.)

Now, start the server again and you should see the following on the screen:

00:00000:00001:2001/05/26 18:29:21.39 server 'bin_iso_1' (ID = 50)
00:00000:00001:2001/05/26 18:29:21.39 server on top of default character set:
00:00000:00001:2001/05/26 18:29:21.39 server 'iso_1' (ID = 1).
00:00000:00001:2001/05/26 18:29:21.39 server Loaded default Unilib conversion handle.

New SSO password for sa:tmfyrkdwpibung

Note that it is not written to the log file, so keep your eyes peeled.

On NT you will have to start the server from the command line and not use
Sybase Central or the control panel.

Obviously, you will want to change the password to something much more
memorable as soon as possible.

Remember to remove the "-psa" from the "RUN" file before you start the server
again or else the password will be changed again for you.

Back to top


1.1.15 How do I set a password to be null?


Since ASE 11 (I cannot remember if it was with the very first release of 11,
but certainly not before) the password column in syslogins has been encrypted.
Setting this column to NULL does not equate to that login having a NULL
password. A NULL password still requires the correct binary string to be in
the password column.

In release 12 and above, set the minimum password length to be 0 using
sp_configure and give that account a null password, and all should be fine.

Before 12, it is not possible to set the minimum password length, so the direct
approach is not possible. So, update the relevant record in syslogins setting
the password column to be the same as that of an account with a NULL password.

How does one get the correct binary value? When a new ASE is built, the 'sa'
account has a NULL password to start with. Setting an account to have the same
binary value as such an 'sa' account should work. Remember that the binary
string is going to be specific to the operating system and the exact release of
ASE etc. Obviously, if you have set the password of your 'sa' accounts to be
something other than NULL (sensible move), then you are going to have to build
yourself a dummy server just to get the correct string. If this is important to
you, then you may wish to store the value somewhere safe once you have
generated it.

Yet another method would be to simply insert the correct hex string into the
password column. Rob Verschoor has a very nice stored proc on his site called
sp_blank_password to allow you to do just this. Go to
blankpwd.html .

Back to top


1.1.16: Does Sybase support Row Level Locking?


With Adaptive Server Enterprise 11.9 Sybase introduced row level locking into
its product. In fact it went further than that: it introduced three different
locking schemes:

* All Pages Locking

This is the scheme that is implemented in all servers prior to 11.9. Here
locks are taken out at the page level, which may include many rows. The
name refers to the fact that all of the pages in any data manipulation
statement are locked, both data and index.

* Data Page Locking

The other two locking schemes are bundled together under the title Data
Page Locking, referring to the fact that only data pages are ever locked in
the conventional sense. Data Page Locking is divided into two categories:
+ Data Only Locking

This locking scheme still locks a page at a time, including all of the
rows contained within that page, but uses a new mechanism, called
latches, to lock index pages for the shortest amount of time. One of
the consequences of this scheme is that it does not update index
pages. In order to support this Sybase has introduced a new concept,
forwarded rows. These are rows that have had to move because they have
grown beyond the space allowed for them on the page on which they were
created (2002 bytes per page).

+ Row Level Locking

Just as it sounds, the lock manager only locks the row involved in the
operation.

Back to top


1.1.17: What platforms does ASE run on?


Sybase has an excellent lookup page that tells you all of the releases that
Sybase has certified as running on a particular platform. Go to http:// .

Back to top


1.1.18: How do I backup databases > 64G on ASE prior to 12.x?


As you are all well aware, prior to version 12 of ASE, dumping large databases
was a real pain. Tape was the only option for anything greater than 64 gig.
This was because only 32 dump devices, or stripes, were supported, and since
file based stripes were restricted to no more than 2 gig, the total amount of
data that could be dumped was <= 32 * 2 = 64G.

With the introduction of ASE 12, the number of stripes was increased.

Back to top


User Database Administration # ASE FAQ

David Owen

Apr 20, 2004, 9:45:05 AM4/20/04
Archive-name: databases/sybase-faq/part7


General Troubleshooting

1. How do I turn off marked suspect on my database?
2. On startup, the transaction log of a database has filled and recovery has
suspended, what can I do?
3. Why do my page locks not get escalated to a table lock after 200 locks?

Performance and Tuning Advanced Administration ASE FAQ


1.4.1 How do I turn off marked suspect on my database?


Say one of your databases is marked suspect as the SQL Server is coming up.
Here are the steps to take to unset the flag.

Remember to fix the problem that caused the database to be marked suspect
after switching the flag.

System 11

1. sp_configure "allow updates", 1
2. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
3. begin transaction
4. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
5. commit transaction
6. shutdown
7. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure
14. shutdown
15. startserver -f RUN_*

System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") -
save this value.
4. begin transaction
5. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
6. commit transaction
7. shutdown
8. startserver -f RUN_*
9. fix the problem that caused the database to be marked suspect
10. begin transaction
11. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
12. commit transaction
13. sp_configure "allow updates", 0
14. reconfigure
15. shutdown
16. startserver -f RUN_*

Pre System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") -
save this value.
4. begin transaction
5. update sysdatabases set status = -32767 where dbid = db_id("my_hosed_db")
6. commit transaction
7. You should now be able to access the database. If not:
1. shutdown
2. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure

Return to top


1.4.2 On startup, the transaction log of a database has filled and recovery
has suspended, what can I do?


You might find the following in the error log:

00:00000:00001:2000/01/04 07:43:42.68 server Can't allocate space for object
'syslogs' in database 'DBbad' because 'logsegment' segment is full/has no free
extents. If you ran out of space in syslogs, dump the transaction log.
Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the
segment.
00:00000:00001:2000/01/04 07:43:42.68 server Error: 3475, Severity: 21, State:
00:00000:00001:2000/01/04 07:43:42.68 server There is no space available in
SYSLOGS for process 1 to log a record for which space has been reserved. This
process will retry at intervals of one minute. The internal error number is -4.

which can prevent ASE from starting properly. A neat solution from Sean Kiely
of Sybase Technical Support works if the database has any "data only"
segments. Obviously this method does not apply to the master database. The
Sybase Troubleshooting Guide has very good coverage of recovering the master
database.

1. You will have to bring the server up with trace flag 3608 to prevent the
recovery of the user databases.
2. sp_configure "allow updates",1
3. Write down the segmap entries from the sysusages table for the toasted
database.
4. update sysusages
set segmap = 7
where dbid = db_id("my_toasted_db")
and segmap = 3
5. select status - 320
from sysdatabases
where dbid = db_id("my_toasted_db") -- save this value.
begin transaction
update sysdatabases set status = -32768 where dbid = db_id("my_toasted_db")
go -- if all is OK, then...
commit transaction
6. Restart the server without the trace flag. With luck it should now have
enough space to recover. If it doesn't, you are in deeper trouble than
before; you do have a good, recent backup, don't you?
7. dump database my_toasted_db with truncate_only
8. Reset the segmap entries in sysusages to be those as saved in 3. above.
9. Shutdown ASE and restart. (The trace flag should have gone at step 6, but
ensure that it is not there!)

Return to top


1.4.3: Why do my page locks not get escalated to a table lock after 200 locks?


Several reasons why this may be happening.

* Are you doing the updates from within a cursor?

The lock promotion only happens if you are attempting to take out 200 locks
in a single operation, i.e. a single insert, update or delete. If you
continually loop over a table using a cursor, locking one row at a time, the
lock promotion never fires. Either use an explicit mechanism to lock the
whole table, if that is required, or remove the cursor, replacing it with an
appropriate join.

* A single operation is failing to escalate?

Even if you are performing a single insert, update or delete, Sybase only
attempts to lock the whole table when the lock escalation point is
reached. If this attempt fails because there is another lock which
prevents the escalation, the attempt is aborted and individual page locking
continues.

Return to top


Performance and Tuning Advanced Administration ASE FAQ

David Owen

Apr 20, 2004, 9:45:04 AM4/20/04
Archive-name: databases/sybase-faq/part5


User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

Advanced Administration Basic Administration ASE FAQ


1.2.1: Changing varchar(m) to varchar(n)


Before you start:

select max(datalength(column_name))
from affected_table

In other words, please be sure you're going into this with your head on
straight.

How To Change System Catalogs

This information is Critical To The Defense Of The Free World, and you would be
Well Advised To Do It Exactly As Specified:

use master

sp_configure "allow updates", 1

reconfigure with override /* System 10 and below */
use victim_database
select name, colid
from syscolumns
where id = object_id("affected_table")
begin tran
update syscolumns
set length = new_value
where id = object_id("affected_table")
and colid = value_from_above
update sysindexes
set maxlen = maxlen + increase/decrease?
where id=object_id("affected_table")
and indid = 0
/* check results... cool? Continue... else rollback tran */
commit tran
use master

sp_configure "allow updates", 0

reconfigure /* System 10 and below */

Return to top


1.2.2: FAQ on partitioning


Index of Sections

* What Is Table Partitioning?
+ Page Contention for Inserts
+ I/O Contention
+ Caveats Regarding I/O Contention
* Can I Partition Any Table?
+ How Do I Choose Which Tables To Partition?
* Does Table Partitioning Require User-Defined Segments?
* Can I Run Any Transact-SQL Command on a Partitioned Table?
* How Does Partition Assignment Relate to Transactions?
* Can Two Tasks Be Assigned to the Same Partition?
* Must I Use Multiple Devices to Take Advantage of Partitions?
* How Do I Create A Partitioned Table That Spans Multiple Devices?
* How Do I Take Advantage of Table Partitioning with bcp in?
* Getting More Information on Table Partitioning

What Is Table Partitioning?

Table partitioning is a procedure that creates multiple page chains for a
single table.

The primary purpose of table partitioning is to improve the performance of
concurrent inserts to a table by reducing contention for the last page of a
page chain.

Partitioning can also potentially improve performance by making it possible to
distribute a table's I/O over multiple database devices.

Page Contention for Inserts

By default, ASE stores a table's data in one double-linked set of pages called
a page chain. If the table does not have a clustered index, ASE makes all
inserts to the table in the last page of the page chain.

When a transaction inserts a row into a table, ASE holds an exclusive page lock
on the last page while it inserts the row. If the current last page becomes
full, ASE allocates and links a new last page.

As multiple transactions attempt to insert data into the table at the same
time, performance problems can occur. Only one transaction at a time can obtain
an exclusive lock on the last page, so other concurrent insert transactions
block each other.

Partitioning a table creates multiple page chains (partitions) for the table
and, therefore, multiple last pages for insert operations. A partitioned table
has as many page chains and last pages as it has partitions.

I/O Contention

Partitioning a table can improve I/O contention when ASE writes information in
the cache to disk. If a table's segment spans several physical disks, ASE
distributes the table's partitions across fragments on those disks when you
create the partitions.

A fragment is a piece of disk on which a particular database is assigned space.
Multiple fragments can sit on one disk or be spread across multiple disks.

When ASE flushes pages to disk and your fragments are spread across different
disks, I/Os assigned to different physical disks can occur in parallel.

To improve I/O performance for partitioned tables, you must ensure that the
segment containing the partitioned table is composed of fragments spread across
multiple physical devices.

Caveats Regarding I/O Contention

Be aware that when you use partitioning to balance I/O you run the risk of
disrupting load balancing even as you are trying to achieve it. The following
scenarios can keep you from gaining the load balancing benefits you want:

* You are partitioning an existing table. The existing data could be sitting
on any fragment. Because partitions are randomly assigned, you run the risk
of filling up a fragment. The partition will then steal space from other
fragments, thereby disrupting load balancing.
* Your fragments differ in size.
* The segment maps are configured such that other objects are using the
fragments to which the partitions are assigned.
* A very large bcp job inserts many rows within a single transaction. Because
a partition is assigned for the lifetime of a transaction, a huge amount of
data could go to one particular partition, thus filling up the fragment to
which that partition is assigned.

Can I Partition Any Table?

No. You cannot partition the following kinds of tables:

1. Tables with clustered indexes (as of release 11.5 it is possible to have a
clustered index on a partitioned table)
2. ASE system tables
3. Work tables
4. Temporary tables
5. Tables that are already partitioned. However, you can unpartition and then
re-partition tables to change the number of partitions.

How Do I Choose Which Tables To Partition?

You should partition heap tables that have large amounts of concurrent insert
activity. (A heap table is a table with no clustered index.) Here are some
examples:

1. An "append-only" table to which every transaction must write
2. Tables that provide a history or audit list of activities
3. A new table into which you load data with bcp in. Once the data is loaded
in, you can unpartition the table. This enables you to create a clustered
index on the table, or issue other commands not permitted on a partitioned
table.

Does Table Partitioning Require User-Defined Segments?

No. By design, each table is intrinsically assigned to one segment, called the
default segment. When a table is partitioned, any partitions on that table are
distributed among the devices assigned to the default segment.

In the example under "How Do I Create A Partitioned Table That Spans Multiple
Devices?", the table sits on a user-defined segment that spans three devices.

Can I Run Any Transact-SQL Command on a Partitioned Table?

No. Once you have partitioned a table, you cannot use any of the following
Transact-SQL commands on the table until you unpartition it:

1. drop table
2. sp_placeobject
3. truncate table
4. alter table table_name partition n

On releases of ASE prior to 11.5 it was not possible to create a clustered
index on a partitioned table either.

How Does Partition Assignment Relate to Transactions?

A user is assigned to a partition for the duration of a transaction. Assignment
of partitions resumes with the first insert in a new transaction. The user
holds the lock, and therefore partition, until the transaction ends.

For this reason, if you are inserting a great deal of data, you should batch it
into separate jobs, each within its own transaction. See "How Do I Take
Advantage of Table Partitioning with bcp in?", for details.

Can Two Tasks Be Assigned to the Same Partition?

Yes. ASE randomly assigns partitions. This means there is always a chance that
two users will vie for the same partition when attempting to insert and one
would lock the other out.

The more partitions a table has, the lower the probability of users trying to
write to the same partition at the same time.

Must I Use Multiple Devices to Take Advantage of Partitions?

It depends on which type of performance improvement you want.

Table partitioning improves performance in two ways: primarily, by decreasing
page contention for inserts and, secondarily, by decreasing i/o contention.
"What Is Table Partitioning?" explains each in detail.

If you want to decrease page contention you do not need multiple devices. If
you want to decrease i/o contention, you must use multiple devices.

How Do I Create A Partitioned Table That Spans Multiple Devices?

Creating a partitioned table that spans multiple devices is a multi-step
procedure. In this example, we assume the following:

* We want to create a new segment rather than using the default segment.
* We want to spread the partitioned table across three devices, data_dev1,
data_dev2, and data_dev3.

Here are the steps:

1. Define a segment:

sp_addsegment newsegment, my_database, data_dev1

2. Extend the segment across all three devices:

sp_extendsegment newsegment, my_database, data_dev2
sp_extendsegment newsegment, my_database, data_dev3

3. Create the table on the segment:

create table my_table
(names varchar(80) not null)
on newsegment

4. Partition the table:

alter table my_table partition 30
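You can verify the result with the standard reporting procedures (a sketch;
the segment and table names follow the example above):

sp_helpsegment newsegment
sp_helpartition my_table

sp_helpsegment lists the devices the segment spans; sp_helpartition shows
one row per partition.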

How Do I Take Advantage of Table Partitioning with bcp in?

You can take advantage of table partitioning with bcp in by following these
steps:

1. Break up the data file into multiple files and simultaneously run each of
these files as a separate bcp job against one table.

Running simultaneous jobs increases throughput.

2. Choose a number of partitions greater than the number of bcp jobs.

Having more partitions than processes (jobs) decreases the probability of
page lock contention.

3. Use the batch option of bcp in. For example, after every 100 rows, force a
commit. Here is the syntax of this command:

bcp table_name in filename -b100

Each time a transaction commits, ASE randomly assigns a new partition for
the next insert. This, in turn, reduces the probability of page lock
contention.
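The three steps can be sketched as a small shell script. The database, table,
login and server names are hypothetical, and the bcp commands are echoed so
that the sketch runs without a server; drop the echo to launch the real jobs:

```shell
# Build a stand-in data file, split it into four pieces, then run one
# (echoed) bcp job per piece in parallel, committing every 100 rows.
seq 1 100000 > big_table.dat
split -l 25000 big_table.dat part_
for f in part_*; do
    echo bcp my_database..my_table in "$f" -c -b100 -Umylogin -SMYSERVER &
done
wait
```

With four jobs against a table partitioned 30 ways (as in the earlier
example) there is plenty of headroom against two jobs colliding on one
partition.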

Getting More Information on Table Partitioning

For more information on table partitioning, see the chapter on controlling
physical data placement in the ASE Performance and Tuning Guide.

Return to top


1.2.3: How to manually drop a table


Occasionally you may find that ASE crashed part-way through a drop table
command and consequently the table didn't drop entirely. Sure, you can't see
it, but that sucker is still floating around somewhere.

Here's a list of instructions to follow when trying to drop a corrupt table:

1. sp_configure "allow updates", 1
reconfigure with override

2. Write db_id down.
use db_name
select db_id()
3. Write down the id of the bad_table:
select id
from sysobjects
where name = "bad_table_name"
4. You will need these index IDs to run dbcc extentzap. Also, remember that if
the table has a clustered index you will need to run extentzap on index
"0", even though there is no sysindexes entry for that indid.
select indid
from sysindexes
where id = table_id
5. This is not required but a good idea:
begin transaction
6. Type in this short script; this gets rid of all system catalog information
for the object, including any object and procedure dependencies that may be
present.

Some of the entries are unnecessary, but better safe than sorry.

declare @obj int
select @obj = id from sysobjects where name = "bad_table_name"
delete syscolumns where id = @obj
delete sysindexes where id = @obj
delete sysobjects where id = @obj
delete sysprocedures where id in
(select id from sysdepends where depid = @obj)
delete sysdepends where depid = @obj
delete syskeys where id = @obj
delete syskeys where depid = @obj
delete sysprotects where id = @obj
delete sysconstraints where tableid = @obj
delete sysreferences where tableid = @obj
delete sysdepends where id = @obj
7. Just do it!
commit transaction
8. Gather information to run dbcc extentzap:
use master
sp_dboption db_name, read, true
use db_name
9. Run dbcc extentzap once for each index (including index 0, the data level)
that you got from above:
use master
dbcc traceon (3604)
dbcc extentzap (db_id, obj_id, indx_id, 0)
dbcc extentzap (db_id, obj_id, indx_id, 1)

Notice that extentzap runs twice for each index. This is because the
last parameter (the sort bit) might be 0 or 1 for each index, and you
want to be absolutely sure you clean them all out.

10. Clean up after yourself.
sp_dboption db_name, read, false
use db_name
sp_configure "allow updates", 0
reconfigure with override

Return to top


1.2.4: Why not max out all my columns?


People occasionally ask the following valid question:

Suppose I have varying lengths of character strings, none of which should
exceed 50 characters.

Is there any advantage of last_name varchar(50) over last_name varchar(255)?

That is, for simplicity, can I just define all my varying strings to be
varchar(255) without even thinking about how long they may actually be? Is
there any storage or performance penalty for this?

There is no performance penalty for doing this, but as another netter pointed
out:
If you want to define indexes on these fields, then you should specify the
smallest size because the sum of the maximal lengths of the fields in the
index can't be greater than 256 bytes.

and someone else wrote in saying:

Your data structures should match the business requirements. This way the
data structure themselves becomes a data dictionary for others to model
their applications (report generation and the like).

Return to top


1.2.5: What's a good example of a transaction?


This answer is geared for Online Transaction Processing (OLTP).

To gain maximum throughput all your transactions should be in stored procedures
- see Q1.5.8. The transactions within each stored procedure should be short and
simple. All validation should be done outside of the transaction and only the
modification to the database should be done within the transaction. Also, don't
forget to name the transaction for sp_whodo - see Q9.2.

The following is an example of a good transaction:

/* perform validation */
select ...
if ... /* error */
/* give error message */
else /* proceed */
begin transaction acct_addition
update ...
insert ...
commit transaction acct_addition

The following is an example of a bad transaction:

begin transaction poor_us
update X ...
select ...
if ... /* error */
/* give error message */
else /* proceed */
update ...
insert ...
commit transaction poor_us

This is bad because:

* the first update on table X is held throughout the transaction. The idea
with OLTP is to get in and out fast.
* If an error message is presented to the end user and we await their
response, we'll maintain the lock on table X until the user presses return.
If the user is out in the can we can wait for hours.

Return to top


1.2.6: What's a natural key?


Let me think back to my database class... okay, I can't think that far so I'll
paraphrase... essentially, a natural key is a key for a given table that
uniquely identifies the row. It's natural in the sense that it follows the
business or real world need.

For example, assume that social security numbers are unique (I believe it is
strived to be unique but it's not always the case), then if you had the
following employee table:


ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then a natural key would be ssn. If the combination of f_name and l_name were
unique at this company, then another natural key would be f_name, l_name. As a
matter of fact, you can have many natural keys in a given table but in practice
what one does is build a surrogate (or artificial) key.

The surrogate key is guaranteed to be unique because (wait, get back, here it
goes again) it's typically a monotonically increasing value. Okay, my
mathematician wife would be proud of me... really all it means is that each
new key is one more than the previous: i+1

The reason one uses a surrogate key is because your joins will be faster.

If we extended our employee table to have a surrogate key:


id identity
ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then instead of doing the following:

where a.f_name = b.f_name
and a.l_name = b.l_name

we'd do this:

where a.id = b.id

We can build indexes on these keys and since Sybase's atomic storage unit is
2K, we can stash more values per 2K page with smaller indexes thus giving us
better performance (imagine the key being 40 bytes versus being say 4 bytes...
how many 40 byte values can you stash in a 2K page versus a 4 byte value? --
and how much wood could a wood chuck chuck, if a wood chuck could chuck wood?)

Does it have anything to do with natural joins?

Um, not really... from "A Guide to Sybase..", McGoveran and Date, p. 112:

The equi-join by definition must produce a result containing two identical
columns. If one of those two columns is eliminated, what is left is called
the natural join.

Return to top


1.2.7: Making a Stored Procedure invisible


System 11.5 and above

It is now possible to encrypt your stored procedure code that is stored in the
syscomments table. This is preferable to the old method of deleting the data,
as deleting will impact future upgrades. You can encrypt the text with the
sp_hidetext system procedure.
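A minimal sketch of its use (the database and procedure names here are made
up for illustration):

use my_database
go
sp_hidetext my_procedure
go

Note that sp_hidetext is irreversible, so keep the source under version
control before hiding it.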

Pre-System 11.5

Perhaps you are trying to prevent the buyer of your software from defncopy'ing
all your stored procedures. It is perfectly safe to delete the syscomments
entries of any stored procedures you'd like to protect:

sp_configure "allow updates", 1

reconfigure with override /* System 10 and below */
use affected_database
delete syscomments where id = object_id("procedure_name")
use master

sp_configure "allow updates", 0


I believe in future releases of Sybase we'll be able to see the SQL that is
being executed. I don't know if that would be simply the stored procedure name
or the SQL itself.

Return to top


1.2.8: Saving space when inserting rows monotonically


If the columns that comprise the clustered index are monotonically increasing
(that is, new row key values are greater than those previously inserted) the
following System 11 dbcc tune will not split the page when it's half way full.
Rather it'll let the page fill and then allocate another page:

dbcc tune(ascinserts, 1, "my_table")

By the way, SyBooks is wrong when it states that the above needs to be reset
when ASE is rebooted. This is a permanent setting.

To undo it:

dbcc tune(ascinserts, 0, "my_table")

Return to top


1.2.9: How to compute database fragmentation



dbcc traceon(3604)
dbcc tab(production, my_table, 0)


A delta of one means the next page is on the same track, two is a short seek,
three is a long seek. You can play with these constants but they aren't that
important.
A table I thought was unfragmented had L1 = 1.2 L2 = 1.8

A table I thought was fragmented had L1 = 2.4 L2 = 6.6

How to Fix

You fix a fragmented table with clustered index by dropping and creating the
index. This measurement isn't the correct one for tables without clustered
indexes. If your table doesn't have a clustered index, create a dummy one and
drop it.
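A sketch of the fix (the table, index and column names are hypothetical):

drop index my_table.my_clust_idx
create clustered index my_clust_idx on my_table (key_col)

For a table without a clustered index, the dummy-index trick is:

create clustered index tmp_defrag on my_table (key_col)
drop index my_table.tmp_defrag

Creating the clustered index rewrites the data pages in key order; dropping
it afterwards leaves the data in place.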

Return to top


1.2.10: Tasks a DBA should do...


A good presentation of a DBA's duties has been made available by Jeff Garbus
of Soaring Eagle Consulting Ltd, where numerous books can also be found. These are
Powerpoint slides converted to web pages and so may be difficult to view with a
text browser!

An alternative view is catalogued below. (OK, so this list is crying out for a
bit of a revamp since checkstorage came along Ed!)

DBA Tasks
| Task | Reason | Period |
| | I consider | If your ASE permits, daily |
| | these the | before your database dumps. If |
| dbcc checkdb, | minimal | this is not possible due to |
| checkcatalog, | dbcc's to | the size of your databases, |
| checkalloc | ensure the | then try the different options |
| | integrity of | so that by the end of, say, a |
| | your database | week, you've run them all. |
| Disaster recovery | Always be | |
| scripts - scripts to | prepared for | |
| rebuild your ASE in | the worst. | |
| case of hardware | Make sure to | |
| failure | test them. | |
| scripts to logically | | |
| dump your master | You can | |
| database, that is bcp | selectively | |
| the critical system | rebuild your | |
| tables: sysdatabases, | database in | Daily |
| sysdevices, syslogins, | case of | |
| sysservers, sysusers, | hardware | |
| syssegments, | failure | |
| sysremotelogins | | |
| | A system | |
| | upgrade is | After any change as well as |
| %ls -la <disk_devices> | known to | daily |
| | change the | |
| | permissions. | |
| dump the user | CYA* | Daily |
| databases | | |
| dump the transaction | CYA | Daily |
| logs | | |
| dump the master | CYA | After any change as well as |
| database | | daily |
| | This is the | |
| System 11 and beyond - | configuration | |
| save the $DSQUERY.cfg | that you've | After any change as well as |
| to tape | dialed in, | daily |
| | why redo the | |
| | work? | |
| | | Depending on how often your |
| | | major tables change. Some |
| | | tables are pretty much static |
| | | (e.g. lookup tables) so they |
| update statistics on | To ensure the | don't need an update |
| frequently changed | performance | statistics, other tables |
| tables and | of your ASE | suffer severe trauma (e.g. |
| sp_recompile | | massive updates/deletes/ |
| | | inserts) so an update stats |
| | | needs to be run either nightly |
| | | /weekly/monthly. This should |
| | | be done using cronjobs. |
| create a dummy ASE and | | |
| do bad things to it: | See disaster | When time permits |
| delete devices, | recovery! | |
| destroy permissions... | | |
| Talk to the | It's better | |
| application | to work with | As time permits. |
| developers. | them than | |
| | against them. | |
| Learn new tools | So you can | As time permits. |
| | sleep! | |
| Read | Passes the | Priority One! |
| comp.databases.sybase | time. | |

* Cover Your Ass

Return to top


1.2.11: How to implement database security


This is a brief run-down of the features and ideas you can use to implement
database security:

Logins, Roles, Users, Aliases and Groups

* sp_addlogin - Creating a login adds a basic authorisation for an account -
a username and password - to connect to the server. By default, no access
is granted to any individual databases.
* sp_adduser - A user is the addition of an account to a specific database.
* sp_addalias - An alias is a method of allowing an account to use a specific
database by impersonating an existing database user or owner.
* sp_addgroup - Groups are collections of users at the database level. Users
can be added to groups via the sp_adduser command.

A user can belong to only one group - a serious limitation that Sybase
might be addressing soon according to the ISUG enhancements requests.
Permissions on objects can be granted or revoked to or from users or
groups.

* sp_role - A role is a high-level Sybase authorisation to act in a specific
capacity for administration purposes. Refer to the Sybase documentation for
details.


Make sure there is a unique login account for each physical person and/or
process that uses the server. Creating generic logins used by many people or
processes is a bad idea - there is a loss of accountability and it makes it
difficult to track which particular person is causing server problems when
looking at the output of sp_who. Note that the output of sp_who gives a
hostname - properly coded applications will set this value to something
meaningful (ie. the machine name the client application is running from) so you
can see where users are running their programs. Note also that if you look at
master..sysprocesses rather than just sp_who, there is also a program_name.
Again, properly coded applications will set this (eg. to 'isql') so you can see
which application is running. If you're coding your own client applications,
make sure you set hostname and program_name via the appropriate Open Client
calls. One imaginative use I've seen of the program_name setting is to
incorporate the connection time into the name, eg APPNAME-DDHHMM (you have 16
characters to play with), as there's no method of determining this otherwise.

Set up groups, and add your users to them. It is much easier to manage an
object permissions system in this way. If all your permissions are set to
groups, then adding a user to the group ensures that users automatically
inherit the correct permissions - administration is *much* simpler.
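As a sketch (the login, group and table names are invented):

sp_addgroup accounting
sp_addlogin jsmith, "s3cretpw"
sp_adduser jsmith, jsmith, accounting
grant select on invoices to accounting

Any user subsequently added to the accounting group picks up the select
permission automatically.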

Objects and Permissions

Access to database objects is defined by granting and/or revoking various
access rights to and from users or groups. Refer to the Sybase documentation
for details.


The ideal setup has all database objects being owned by the dbo, meaning no
ordinary users have any default access at all. Specific permissions users
require to access the database are granted explicitly. As mentioned above - set
permissions for objects to a group and add users to that group. Any new user
added to the database via the group then automatically obtains the correct set
of permissions.

Preferably, no access is granted at all to data tables, and all read and write
activity is accomplished through stored procedures that users have execute
permission on. The benefit of this from a security point of view is that access
can be rigidly controlled with reference to the data being manipulated, user
clearance levels, time of day, and anything else that can be programmed via
T-SQL. The other benefits of using stored procedures are well known (see Q1.5.8
). Obviously whether you can implement this depends on the nature of your
application, but the vast majority of in-house-developed applications can rely
solely on stored procedures to carry out all the work necessary. The only
server-side restriction on this method is the current inability of stored
procedures to adequately handle text and image datatypes (see Q1.5.12). To get
around this views can be created that expose only the necessary columns to
direct read or write access.


Views can be a useful general security feature. Where stored procedures are
inappropriate views can be used to control access to tables to a lesser extent.
They also have a role in defining row-level security - eg. the underlying table
can have a security status column joined to a user authorisation level table in
the view so that users can only see data they are cleared for. Obviously they
can also be used to implement column-level security by screening out sensitive
columns from a table.
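A sketch of the row-level idea (all table and column names are invented;
suser_name() returns the current login):

create view cleared_docs
as
select d.doc_id, d.title
from documents d, clearances c
where c.login = suser_name()
and d.security_level <= c.max_level

grant select on cleared_docs to readers

Users then see only the rows at or below their clearance level, and any
sensitive columns of the underlying table are screened out of the view.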


Triggers can be used to implement further levels of security - they could be
viewed as a last line of defence in being able to rollback unauthorised write
activity (they cannot be used to implement any read security). However, there
is a strong argument that triggers should be restricted to doing what they were
designed for - implementing referential integrity - rather being loaded up with
application logic.

Administrative Roles

With Sybase version 10 came the ability to grant certain administrative roles
to user accounts. Accounts can have sa-level privilege, or be restricted to
security or operator roles - see sp_role.


The use of any generic account is not a good idea. If more than one person
requires access as sa to a server, then it is more accountable and traceable if
they each have an individual account with sa_role granted.

Return to top


1.2.12: How to Shrink a Database


Warning: This document has not been reviewed. Treat it as alpha-test
quality information and report any problems and suggestions to the FAQ
maintainer.

It has historically been difficult to shrink any database except tempdb
(because it is created fresh every boot time). The two methods commonly used
have been:

1. Ensure that you have scripts for all your objects (some tools like SA
Companion, DB Artisan or dbschema.pl from Sybperl can create scripts from
an existing database), then bcp out your data, drop the database, recreate
it smaller, run your scripts, and bcp in your data.
2. Use a third-party tool such as DataTools' SQL BackTrack, which in essence
automates the first process.

This technote outlines a third possibility that can work in most cases.

An Unsupported Method to Shrink a Database

This process is fairly trivial in some cases, such as removing a recently added
fragment or trimming a database that has a log fragment as its final
allocation, but can also be much more complicated or time consuming than the
script and bcp method.

General Outline

The general outline of how to do it is:

1. Make a backup of the current database
2. Migrate data from sysusages fragments with high lstart values to fragments
with low lstart values.
3. Edit sysusages to remove high lstart fragments that no longer have data
4. Reboot ASE.


1. Dump your database. If anything goes wrong, you will need to recover from
this backup!
2. Decide how many megabytes of space you wish to remove from your database.
3. Examine sysusages for the database. You will be shrinking the database by
removing the fragments with the highest lstart values. If the current
fragments are not of appropriate sizes, you may need to drop the database,
recreate it so there are more fragments, and reload the dump.

A trivial case: An example of a time when you can easily shrink a
database is if you have just altered it and are sure there has been no
activity on the new fragment. In this case, you can directly delete the
last row in sysusages for the db (this row was just added by alter db)
and reboot the server and it should come up cleanly.

4. Change the segmaps of the fragments you plan to remove to 0. This will
prevent future data allocations to these fragments.

Note: If any of the fragments you are using have user defined segments
on them, drop those segments before doing this.

sp_configure "allow updates", 1

reconfigure with override -- not necessary in System 11
update sysusages
set segmap = 0
where dbid = <dbid>
and lstart = <lstart>
dbcc dbrepair(<dbname>, remap)

Ensure that there is at least one data (segmap 3) and one log (segmap 4)
fragment, or one mixed (segmap 7) fragment.

If the server has been in use for some time, you can shrink it by deleting
rows from sysusages for the db, last rows first, after making sure that no
objects have any allocations on the usages.

5. Determine which objects are on the fragments you plan to remove.
dbcc usedextents(dbid, 0, 0, 1)

Find the extent with the same value as the lstart of the first fragment you
plan to drop. You need to migrate every object appearing from this point on
in the output.

6. Migrate these objects onto earlier fragments in the database.

Objids other than 0 or 99 are objects that you must migrate or drop. You
can migrate a user table by building a new clustered index on the table
(since the segmap was changed, the new allocations will not go on this
fragment).
You can migrate some system tables (but not all) using the sp_fixindex
command to rebuild its clustered index. However, there are a few system
tables that cannot have their clustered indexes rebuilt, and if they have
any allocations on the usage, you are out of luck.

If the objid is 8, then it is the log. You can migrate the log by ensuring
that another usage has a log segment (segmap 4 or 7). Do enough activity on
the database to fill an extent's worth of log pages, then checkpoint and
dump tran.

Once you have moved all the objects, delete the row from sysusages and
reboot the server.

Run dbcc checkdb and dbcc checkalloc on the database to be sure you are ok,
then dump the database again.
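The sysusages inspection in step 3 can be sketched as (the database name is
hypothetical):

select segmap, lstart, size
from master..sysusages
where dbid = db_id("my_database")
order by lstart

The rows with the highest lstart values are the candidates for removal.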

Return to top


1.2.13: How do I audit the SQL sent to the server?


This does not seem to be well documented, so here is a quick means of auditing
the SQL text that is sent to the server. Note that this simply audits the SQL
sent to the server. So, if your user process executes a big stored procedure,
all you will see here is a call to the stored procedure. None of the SQL that
is executed as part of the stored procedure will be listed.

Firstly, you need to have installed Sybase security (which involves installing
the sybsecurity database and loading it using the script $SYBASE/scripts/
installsecurity). Read the Sybase Security Administration Manual, you may
want to enable a threshold procedure to toggle between a couple of audit
tables. Be warned that the default configuration option "suspend auditing
when device full" is set to 1. This means that the server will suspend all
normal SQL operations if the audit database becomes full, until the sso logs
in and gets rid of some data. You might want to consider changing this to 0
unless yours is a particularly sensitive installation.

Once that is done, you need to enable auditing. If you haven't already, you
will need to restart ASE in order to start the audit subsystem. Then comes the
bit that does not seem well documented, you need to select an appropriate audit
option, and the one for the SQL text is "cmdtext". From the sybsecurity
database, issue

sp_audit "cmdtext",<username>,"all","on"

for each user on the system that you wish to collect the SQL for. sp_audit seems
to imply that you can replace "<username>" with all, but I get the error
message "'all' is not a valid user name". Finally, enable auditing for the
system as a whole using

sp_configure "auditing",1

If someone knows where in the manuals this is well documented, I will add a
reference.
Note: The stored procedure sp_audit had a different name under previous
releases. I think that it was called sp_auditoption. Also, to get a full list
of the options and their names, go into sybsecurity and simply run sp_audit
with no arguments.

Return to top


1.2.14: sp_helpdb/sp_helpsegment is returning negative numbers


A number of releases of ASE return negative numbers for sp_helpdb. One solution
given by Sybase is to restart the server. Hmm... not always possible. An
alternative is to use the dbcc command 'usedextents'. Issue the following:

dbcc traceon(3604)
dbcc usedextents(db_id, 0, 1, 1)

and the problem should disappear. This is actually a solved case, Sybase
solved case no: 10454336; look that case up on Sybase's support site for more
information.

Return to top


Advanced Administration Basic Administration ASE FAQ

David Owen

Apr 20, 2004, 9:45:07 AM4/20/04
Archive-name: databases/sybase-faq/part10

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

1.5.9: You and showplan output


As recently pointed out in the Sybase-L list, the showplan information that was
here is terribly out of date. It was written back when the output from ASE and
MS SQL Server were identical. (To see just how differenet they have become,
have a look at the O'Reilly book "Transact-SQL Programming". It does a line for
line comparison.) The write up in the Performance and Tuning Guide is
excellent, and this section was doing nothing but causing problems.

If you do have a need for the original document, then it can be found here, but
it will no longer be considered part of the official FAQ.

Back to top


1.5.10: Poor man's sp_sysmon


This is needed for System 10 and Sybase 4.9.2 where there is no sp_sysmon
command available.

Fine tune the waitfor for your application. You may need TS Role -- see Q3.1.

use master
dbcc traceon(3604)
dbcc monitor ("clear", "all", "on")
waitfor delay "00:01:00"
dbcc monitor ("sample", "all", "on")
dbcc monitor ("select", "all", "on")
dbcc traceon(8399)
select field_name, group_name, value
from sysmonitors
dbcc traceoff(8399)
dbcc traceoff(3604)

Back to top


1.5.11: View MRU-LRU procedure cache chain


dbcc procbuf gives a listing of the current contents of the procedure cache. By
repeating the process at intervals it is possible to watch procedures moving
down the MRU-LRU chain, and so to see how long procedures remain in cache. The
neat thing about this approach is that you can size your cache according to
what is actually happening, rather than relying on estimates based on
assumptions that may not hold on your site.

To run it:

dbcc traceon(3604)
dbcc procbuf

If you use sqsh it's a bit easier to grok the output:

dbcc traceon(3604);
dbcc procbuf;|fgrep <pbname>

See Q1.5.7 regarding procedure cache sizing.

Back to top


1.5.12: Improving Text/Image Type Performance


If you know that you are going to be using a text/image column immediately,
insert the row setting the column to a non-null value.

There's a noticeable performance gain.

Unfortunately, text and image datatypes cannot be passed as parameters to
stored procedures. The address of the text or image location must be created
and returned where it is then manipulated by the calling code. This means that
transactions involving both text and image fields and stored procedures are not
atomic. However, the datatypes can still be declared as not null in the table
definition.
Given this example -

create table key_n_text
(key int not null,
notes text not null)

This stored procedure can be used -

create procedure sp_insert_key_n_text
@key int,
@textptr varbinary(16) output
as

/*
** Generate a valid text pointer for WRITETEXT by inserting an
** empty string in the text field.
*/
insert key_n_text
values (@key, "")

select @textptr = textptr(notes)
from key_n_text
where key = @key

return 0

The return parameter is then used by the calling code to update the text field,
via the dbwritetext() function if using DB-Library for example.
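From isql the returned pointer can be fed straight to writetext (a sketch
following the table and procedure defined above; the note text is made up):

declare @ptr varbinary(16)
exec sp_insert_key_n_text 42, @ptr output
writetext key_n_text.notes @ptr "the actual note text"

DB-Library clients would pass the same pointer to dbwritetext() instead.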

Back to top


Server Monitoring General Troubleshooting ASE FAQ

Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

Platform Specific Issues - Solaris Performance and Tuning ASE FAQ


1.6.1: How do I configure Monitor Server?


Monitor Server is a separate server from the normal dataserver. Its purpose, as
the name suggests, is to monitor ASE. It uses internal counters to determine
what is happening. On its own, it does not actually do a lot. You need to hook
up a client of some sort in order to be able to view the results.

Configuration is easy. The Sybase documentation is very good on this one for
either Unix or NT. Rather than repeat it all here, go to the Sybase web site
and check out the Monitor Server User's Guide, which is available in both HTML
and PDF editions; look for "monbook.pdf". If Sybase has skipped to ASE 99.9 and
those pages have moved, then you will have to go search the Sybase home pages.

Back to top


1.6.2: OK, that was easy, how do I configure a client?


I see that you like a challenge! Sybase offers a Java client to view the output
from Monitor Server. It is accessible either standalone or via the Win32
edition of Sybase Central.

Standalone on NT/2000

I could not find anything about setting up the clients in the standard
documentation set. However, there is a small paper on it here (towards the
bottom). It does miss out a couple of important details, but is helpful for all
that.

I did not try too hard to get the 11.9.2 version running, since the 12.5
version will monitor 11.9 servers.

I do not have a boxed release of ASE 12.5 for NT, just the developers release.
This does not come with all of the necessary files. In order to run the Monitor
Client, you will need the PC Client CD that came with the boxed release. If all
you have is the developer's edition, you might be stuck. It would be worth
getting in touch with Sybase to see if they could ship you one. There is
probably a charge!

You will need to install the client software. If you have a release of ASE
already installed and running you might want to install this into a separate
area. I am not sure what files it includes and versions etc, but if you have
the space I recommend saving yourself some hassle. If you have an older edition
of ASE installed, the installation will ask if you want to overwrite two files,
mclib.dll and mchelp.dll, both of which should reside in your winnt/system32
directory. It is important that you accept both of the overwrites. The older
versions of these files do not seem to work.

Once installed, you will also need to spend some time playing with environment
variables. I have got 3 editions of ASE all running successfully on the one
machine (see Q1.3.9). I chose to have one user for each ASE instance, each with
their own local environment variables pointing to the relevant installation for
them, plus a generic account for my main user that I configured to use the
software installed from the client CD. I adjusted the variables so that each
user had their own set of variables and all of the installations worked OK.

Next, you need a copy of Java 1.1.8 installed. The client CD has a copy of JDK
1.1.8 in the "ASEP_Win32" directory. This is the one to go for, as I am sure
that it was the one that the Monitor Client was built with. I did try a version
from Sun's Java archive, but it failed.

Next, set up the JAVA_HOME environment variable. If you installed the JDK into
its default location, that will be C:\jdk1.1.8.

Check to ensure that your CLASSPATH is defined as (assuming that you installed
the client into C:\Sybase_Client):


You may want to check that the files mclib.dll and mchelp.dll exist in your
winnt/system32 directory if you were not asked to replace them earlier. You may
also want to check that the default Java command is correct with java -version.
It should return

java version "1.1.8"

You should now be able to fire up the main window with:

java sybase.monclt.mcgui.procact.ProcActApp 12.5 sa "sa_password" en 0 sccsen.hlp

(The paper says that you should use "jre" and not "java". That gives me a
consistent "Class not found...". I do not know why.)

You should be presented with the main process activity window, which will fill
with process information after 10 seconds. Choose "File->Monitors >" to choose
a monitoring graph. The available monitors include:

* Performance Summary
* Performance Trends...
* Process Current SQL Statement
* Network Activity

Obviously, all of this can be set from the command line or via a batch script.
Shove the following into a file called mon.bat and invoke using mon ASE_SERVER

java sybase.monclt.mcgui.procact.ProcActApp %1 12.5 %2 sa "%3" en 0 scssen.hlp

Obviously, you will need to replace "C:\SYBASE_CLIENT" with the correct string
pointing to your Sybase ASE installation.

Via Sybase Central on NT/2000

You will need to have installed the version of the Java Development Kit that
comes with your CD, as per standalone installation. Next, create a shortcut to
the file %SYBASE%\Sybase Central 3.2\win32\scview.exe. This is the Win 32
version of Sybase Central. Next, edit the shortcut's properties (right click on
the shortcut and select "Properties"). Now, edit the "Start In" field to be "C:
\jdk1.1.8\bin", assuming that you installed the JDK into its default location.

Now, assuming that both the ASE and Monitor servers are running, start up this
version of Sybase Central. Unlike the Java edition, all of the Servers from the
SQL.INI file are displayed at startup. Right click on the ASE server you wish
to monitor and select "Properties". This brings up a triple tabbed screen.
Select the "Monitor Server" tab and use the drop down to select the appropriate
monitor server. Now, connect to the ASE server and you will see another level
in the options tree called "Monitors". Click on it and you should see a
complete list of the monitors you can choose from. Double clicking on one
should display it. The output is exactly the same as for standalone operation.

Back to top


Platform Specific Issues - Solaris Performance and Tuning ASE FAQ

David Owen

Apr 20, 2004, 9:45:07 AM4/20/04
Archive-name: databases/sybase-faq/part9


1.5.7: How much memory to configure?


System 10 and below.


At some point you'll wonder if your ASE has been configured with sufficient
memory. We hope that it's not during some crisis, but that's probably when
it'll happen.

The most important thing in setting up memory for an ASE is that it has to be
large enough to accommodate:

* concurrent user connections
* active procedures
* and concurrent open databases.

Not setting the ASE up correctly will affect its performance. A delicate
balance needs to be struck: your ASE must be large enough to accommodate the
users, but not so large that it adversely affects the host machine (for
example, by causing swapping).

Assumptions made of the reader:

* The reader has some experience administering ASEs.
* All queries have been tuned and that there are no unnecessary table scans.


As the ASE starts up, it pre-allocates its structures to support the
configuration. The memory that remains after the pre-allocation phase is the
available cache.

The available cache is partitioned into two pieces:

1. buffer cache - data pages to be sent to a user connection or flushed to disk.
2. procedure cache - where query plans live.

The idea is to determine if the buffer cache and the procedure cache are of
adequate size. As a DBA you can use dbcc memusage to ascertain this.

The information provided by dbcc memusage, daunting at first but taken in
sections, is easy to understand and provides the DBA with the vital information
necessary to determine if more memory is required and where it is needed.

If the procedure cache is too small, user connections will get sporadic 701's:

There is insufficient system memory to run this query.

If the buffer cache is too small, response time may be poor or spiky.

The following text describes how to interpret the output of dbcc memusage and
to correlate this back to the fundamental question:

Does my ASE have enough memory?


Before delving into the world of dbcc memusage, some definitions to get us
started:

Buffer Cache (also referred to as the Data Cache)
Area of memory where ASE stores the most recently used data pages and index
pages in 2K page units. If ASE finds a data page or index page in the
buffer cache, it doesn't need to perform a physical I/O (it is reported as
a logical I/O). If a user connection selects data from a database, the ASE
loads the 2K data page(s) here and then hands the information off to the
user connection. If a user connection updates data, these pages are
altered, and then they are flushed out to disk by the ASE.

This is a bit simplistic but it'll do. Read on for more info though.

The cache is maintained as a doubly linked list. The head of the list
is where the most recently used pages are placed. Naturally towards the
tail of the chain are the least recently used pages. If a page is
requested and it is found on the chain, it is moved back to the front
of the chain and the information is relayed, thus saving a physical I/O.

But wait! This recycling is not done forever. When a checkpoint occurs
any dirty pages are flushed. Also, the parameter cbufwashsize
determines how many times a page containing data can be recycled before
it has to be flushed out to disk. For OAM and index pages the following
parameters apply coamtrips and cindextrips respectively.

Procedure Cache
Area of memory where ASE stores the most recently used query plans of
stored procedures and triggers. This procedure cache is also used by the
Server when a procedure is being created and when a query is being
compiled. Just like the buffer cache, if SQL Server finds a procedure or a
compilation already in this cache, it doesn't need to read it from the disk.

The size of procedure cache is determined by the percentage of remaining
memory configured for this Server parameter after ASE memory needs are met.

Available Cache

When the ASE starts up it pre-allocates its data structures to support the
current configuration. For example, based on the number of user connections,
additional netmem, open databases and so forth, the dataserver pre-allocates
the memory it requires to support these configured items.

What remains after the pre-allocation is the available cache. The available
cache is divided into buffer cache and procedure cache. The sp_configure
"procedure cache" parameter determines the percentage breakdown. A value of 20
would read as follows:

20% of the available cache is dedicated to the procedure cache and 80% is
dedicated to the buffer cache.
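As a sketch of the arithmetic (the page count here is illustrative, not taken
from the example server):

```python
# Split the available cache between procedure cache and buffer cache,
# mirroring the sp_configure "procedure cache" percentage described above.

def split_available_cache(available_2k_pages, proc_cache_pct):
    """Return (procedure_cache_pages, buffer_cache_pages)."""
    proc_pages = available_2k_pages * proc_cache_pct // 100
    return proc_pages, available_2k_pages - proc_pages

# With 100,000 2K pages of available cache and "procedure cache" set to 20,
# 20,000 pages go to the procedure cache and 80,000 to the buffer cache.
print(split_available_cache(100000, 20))
```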

Your pal: dbcc memusage

dbcc memusage takes a snapshot of your ASE's current memory usage and reports
this vital information back to you. The information returned provides
information regarding the use of your procedure cache and how much of the
buffer cache you are currently using.

An important piece of information is the size of the largest query plan. We'll
talk about that more below.

It is best to run dbcc memusage after your ASE has reached a working set. For
example, at the end of the day or during lunch time.

Running dbcc memusage will freeze the dataserver while it does its work.
The more memory you have configured for the ASE the longer it'll take. Our
experience is that for an ASE with 300MB it'll take about four minutes to
execute. During this time, nothing else will execute: no user queries, no
other commands.

In order to run dbcc memusage you must have sa privileges. Here's a sample
execution for discussion purposes:

1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go
Memory Usage:

                      Meg.    2K Blks        Bytes

Configured Memory:  300.0000   153600    314572800

Code size:            2.6375     1351      2765600
Kernel Structures:   77.6262    39745     81396975
Server Structures:   54.4032    27855     57045920
Page Cache:         129.5992    66355    135894640
Proc Buffers:         1.1571      593      1213340
Proc Headers:        25.0840    12843     26302464

Number of page buffers: 63856
Number of proc buffers: 15964

Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387

Procedure Cache, Top 20:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
Database Id: 6
Object Id: 1668357178
Object Name: lp_cm_subcase_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 10
Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
Database Id: 6
Object Id: 132351706
Object Name: csp_get_case
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 9
Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
Database Id: 6
Object Id: 1858261845
Object Name: lp_get_last_caller_new
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse.
Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the ASE.

Memory Usage:

                      Meg.    2K Blks        Bytes

Configured Memory:  300.0000   153600    314572800

Code size:            2.6375     1351      2765600
Kernel Structures:   77.6262    39745     81396975
Server Structures:   54.4032    27855     57045920
Page Cache:         129.5992    66355    135894640
Proc Buffers:         1.1571      593      1213340
Proc Headers:        25.0840    12843     26302464

Number of page buffers: 63856
Number of proc buffers: 15964

The Configured Memory does not equal the sum of the individual components.
It does in the sybooks example but in practice it doesn't always. This is
not critical and it is simply being noted here.

The Kernel Structures and Server structures are of mild interest. They can be
used to cross-check that the pre-allocation is what you believe it to be. The
salient line items are Number of page buffers and Number of proc buffers.

The Number of proc buffers translates directly to the number of 2K pages
available for the procedure cache.

The Number of page buffers is the number of 2K pages available for the buffer
cache.

As a side note and not trying to muddle things, these last two pieces of
information can also be obtained from the errorlog:

... Number of buffers in buffer cache: 63856.
... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and
63,856 2K pages (~126MB) for the buffer cache.
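The conversion from page counts to megabytes is simple arithmetic; a small
sketch using binary megabytes, which lands slightly under the loosely rounded
figures above:

```python
# Convert 2K-page counts reported by dbcc memusage (or the errorlog)
# into megabytes.
PAGE_BYTES = 2048

def pages_to_mb(pages):
    return pages * PAGE_BYTES / (1024 * 1024)

print(round(pages_to_mb(15964), 1))  # procedure cache: ~31.2 MB
print(round(pages_to_mb(63856), 1))  # buffer cache:    ~124.7 MB
```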

Buffer Cache

The buffer cache contains the data pages that the ASE will be either flushing
to disk or transmitting to a user connection.

If this area is too small, the ASE must flush 2K pages sooner than might be
necessary to satisfy a user connection's request.

For example, in most database applications there are small edit tables that are
used frequently by the application. These tables will populate the buffer cache
and normally will remain resident during the entire life of the ASE. This is
good because a user connection may request validation and the ASE will find the
data page(s) resident in memory. If however there is insufficient memory
configured, then these small tables will be flushed out of the buffer cache in
order to satisfy another query. The next time a validation is requested, the
tables will have to be re-read from disk in order to satisfy the request. Your
performance will degrade.

Memory access is easily an order of magnitude faster than performing a physical
I/O.

In this example we know from the previous section that we have 63,856 2K pages
(or buffers) available in the buffer cache. The question to answer is, "do we
have sufficient buffer cache configured?"

The following is the output of the dbcc memusage regarding the buffer cache:

Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387
Index Legend
| | |
| Value | Definition |
| 0 | Table data |
| 1 | Clustered index |
| 2-250 | Nonclustered |
| | indexes |
| 255 | Text pages |

* To translate the DB Id use select db_name(#) to map back to the database
* To translate the Object Id, use the respective database and use the select
object_name(#) command.

It's obvious that the first 10 items take up the largest portion of the buffer
cache. Sum these values and compare the result to the amount of buffer cache
configured.

Summing the 10 items nets a result of 45,273 2K data pages. Comparing that to
the number of pages configured, 63,856, we see that this ASE has sufficient
memory configured.
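The sum can be checked mechanically; a quick sketch over the ten largest
entries in the listing above:

```python
# "2K Buffers" column for the ten largest buffer cache entries above.
top_10 = [9424, 7799, 7563, 7428, 2972, 2780, 2334, 2047, 1766, 1160]

used = sum(top_10)    # pages occupied by the biggest objects
configured = 63856    # pages available in the buffer cache
print(used, configured - used)  # 45273 used, 18583 pages of headroom
```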

When do I need more Buffer Cache?

I follow these rules of thumb to determine when I need more buffer cache:

* If the sum of all the entries reported is equal to the number of pages
  configured and all entries are relatively the same size, crank it up.
* Note the natural groupings that occur in the example. If the difference
between any of the groups is greater than an order of magnitude I'd be
suspicious. But only if the sum of the larger groups is very close to the
number of pages configured.

Procedure Cache

If the procedure cache is not of sufficient size you may get sporadic 701
errors:

There is insufficient system memory to run this query.

In order to calculate the correct procedure cache one needs to apply the
following formula (found in ASE Troubleshooting Guide - Chapter 2, Procedure
Cache Sizing):

proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25

The flaw with the above formula is that if only 10% of the users are
executing the largest plan, then you'll overshoot. If you have distinct
classes of connections whose largest plans are mutually exclusive then
you need to account for that:

ttl proc cache = proc cache size * x% + proc cache size * y% ...

The max(# of concurrent users) is not the number of user connections configured
but rather the actual number of connections during the peak period.

To compute the size of the largest [query] plan take the results from the dbcc
memusage's, Procedure Cache section and apply the following formula:

query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the
output of the dbcc memusage:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages

Entering the respective numbers, the query plan size for lp_cm_case_list is

query plan size = 339072 / 16
query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and
the largest value would be plugged into the procedure cache size formula:

Query Plan Sizes
| | |
| | Query |
| Object | Plan |
| | Size |
| lp_cm_case_list | 21K |
| lp_cm_subcase_list | 21K |
| csp_get_case | 17K |
| lp_get_last_caller_new | 28K |

The size of the largest [query] plan is 28K.
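Applying the same formula to all four procedures reported above (a sketch; each
figure is the "Size of plans" bytes divided by the "Number of plans"):

```python
# (size of plans in bytes, number of plans) from the dbcc memusage output.
plans = {
    "lp_cm_case_list":        (339072, 16),
    "lp_cm_subcase_list":     (212680, 10),
    "csp_get_case":           (157068, 9),
    "lp_get_last_caller_new": (57368,  2),
}

plan_sizes = {name: size // n for name, (size, n) in plans.items()}
largest = max(plan_sizes.values())
print(plan_sizes)
print(largest)  # 28684 bytes, i.e. lp_get_last_caller_new's roughly 28K plan
```

Note that csp_get_case works out to roughly 17K; either way the largest plan
remains lp_get_last_caller_new's, at about 28K.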

Entering these values into the formula:

proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25
proc cache size = 491 connections * 28K * 1.25
proc cache size = 17,185 2K pages required

Our example ASE has 15,964 2K pages configured but 17,185 2K pages are
required. This ASE can benefit by having more procedure cache configured.

This can be done one of two ways:

1. If you have some headroom in your buffer cache, then sp_configure
"procedure cache" to increase the ratio of procedure cache to buffer cache:

procedure cache =
[ proposed procedure cache ] /
( [ current procedure cache ] + [ current buffer cache ] )

The new procedure cache would be 22%:

procedure cache = 17,185 / ( 15,964 + 63,856 )
procedure cache = .2153 or 22%

2. If the buffer cache cannot be shrunken, then sp_configure "memory" to
increase the total memory:

mem size =
([ proposed procedure cache ]) /
([ current procedure cache ] / [ current configured memory ])

The new memory size would be 165,348 2K pages, assuming that the
procedure cache percentage is unchanged:

mem size = 17,185 / ( 15,964 / 153,600 )
mem size = 165,348 2K pages
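Both resizing options can be recomputed from the example's figures; a sketch
(following the FAQ's convention of counting each K of plan size as one 2K
page):

```python
# Figures from the worked example above.
current_proc   = 15964                 # 2K pages of procedure cache
current_buffer = 63856                 # 2K pages of buffer cache
current_memory = 153600                # total configured memory, in 2K pages
needed_proc    = int(491 * 28 * 1.25)  # 491 users * 28K plan * 1.25

# Option 1: raise the sp_configure "procedure cache" percentage.
new_pct = needed_proc / (current_proc + current_buffer)

# Option 2: keep the percentage and raise total memory instead.
new_memory = needed_proc / (current_proc / current_memory)

print(needed_proc)           # 17185 pages required
print(round(new_pct * 100))  # 22 (percent)
print(round(new_memory))     # 165348 (2K pages)
```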

Back to top


1.5.8: Why should I use stored procedures?


There are many advantages to using stored procedures (unfortunately they do not
handle the text/image types):

* Security - you can revoke access to the base tables and only allow users to
access and manipulate the data via the stored procedures.
* Performance - stored procedures are parsed and a query plan is compiled.
This information is stored in the system tables and it only has to be done
once.
* Network - if you have users who are on a WAN (slow connection) having
stored procedures will improve throughput because less bytes need to flow
down the wire from the client to ASE.
* Tuning - if you have all your SQL code housed in the database, then it's
easy to tune the stored procedure without affecting the clients (unless of
course the parameters change).
* Modularity - during application development, the application designer can
concentrate on the front-end and the DB designer can concentrate on the
database.
* Network latency - a client on a LAN may seem slower if it is sending large
numbers of separate requests to a database server, bundling them into one
procedure call may improve responsiveness. Also, servers handling large
numbers of small requests can spend a surprising amount of CPU time
performing network IO.
* Minimise blocks and deadlocks - it is a lot easier to handle a deadlock if
the entire transaction is performed in one database request, also locks
will be held for a shorter time, improving concurrency and potentially
reducing the number of deadlocks. Further, it is easier to ensure that all
tables are accessed in a consistent order if code is stored centrally
rather than dispersed among a number of apps.

Back to top


David Owen

Apr 20, 2004, 9:45:06 AM4/20/04
Archive-name: databases/sybase-faq/part8


Performance and Tuning

1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered indexes?
1.5.4 Optimistic versus pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

Server Monitoring General Troubleshooting ASE FAQ


1.5.1: Sybase ASE Performance and Tuning


Before going any further, Eric Miner ( has made available
two presentations that he made at Techwave 1999. The first covers the use of
optdiag. The second covers features in the way the optimiser works in ASE
11.9.2 and 12. These are Powerpoint slides converted to web pages, so they
might be tricky to read with a text based browser!

All Components Affect Response Time & Throughput

We often think that high performance is defined as a fast data server, but the
picture is not that simple. Performance is determined by all these factors:

* The client application itself:
+ How efficiently is it written?
+ We will return to this later, when we look at application tuning.
* The client-side library:
+ What facilities does it make available to the application?
+ How easy are they to use?
* The network:
+ How efficiently is it used by the client/server connection?
* The DBMS:
+ How effectively can it use the hardware?
+ What facilities does it supply to help build efficient, fast applications?
* The size of the database:
+ How long does it take to dump the database?
+ How long to recreate it after a media failure?

Unlike some products which aim at performance on paper, Sybase aims at solving
the multi-dimensional problem of delivering high performance for real-world
applications.


To gain an overview of important considerations and alternatives for the
design, development, and implementation of high performance systems in the
Sybase client/server environment. The issues we will address are:

* Client Application and API Issues
* Physical Database Design Issues
* Networking Issues
* Operating System Configuration Issues
* Hardware Configuration Issues
* ASE Configuration Issues

Client Application and Physical Database Design design decisions will
account for over 80% of your system's "tuneable" performance so ... plan
your project resources accordingly !

It is highly recommended that every project include individuals who have taken
Sybase Education's Performance and Tuning course. This 5-day course provides
the hands-on experience essential for success.

Client Application Issues

* Tuning Transact-SQL Queries
* Locking and Concurrency
* ANSI Changes Affecting Concurrency
* Application Deadlocking
* Optimizing Cursors in v10
* Special Issues for Batch Applications
* Asynchronous Queries
* Generating Sequential Numbers
* Other Application Issues

Tuning Transact-SQL Queries

* Learn the Strengths and Weaknesses of the Optimizer
* One of the largest factors determining performance is TSQL! Test not only
for efficient plans but also semantic correctness.
* Optimizer will cost every permutation of accesses for queries involving 4
tables or less. Joins of more than 4 tables are "planned" 4-tables at a
time (as listed in the FROM clause) so not all permutations are evaluated.
You can influence the plans for these large joins by the order of tables in
the FROM clause.
* Avoid the following, if possible:
+ What are SARGS?

This is short for search arguments. A search argument is essentially a
constant value such as:
o "My company name"
o 3448

but not:
o 344 + 88
o like "%what you want%"
+ Mathematical Manipulation of SARGs

SELECT name FROM employee WHERE salary * 12 > 100000

+ Use of Incompatible Datatypes Between Column and its SARG

Float & Int, Char & Varchar, Binary & Varbinary are Incompatible;

Int & Intn (allow nulls) OK

+ Use of multiple "OR" Statements - especially on different columns in
same table. If any portion of the OR clause requires a table scan, it
will! OR Strategy requires additional cost of creating and sorting a
work table.
+ Not using the leading portion of the index (unless the query is
completely covered)
+ Substituting "OR" with "IN (value1, value2, ... valueN)". The optimizer
automatically converts this to an "OR".
+ Use of Non-Equal Expressions (!=) in WHERE Clause.
* Use Tools to Evaluate and Tune Important/Problem Queries
+ Use the "set showplan on" command to see the plan chosen as "most
efficient" by optimizer. Run all queries through during development and
testing to ensure accurate access model and known performance.
Information comes through the Error Handler of a DB-Library application.
+ Use the "dbcc traceon(3604, 302, 310)" command to see each alternative
plan evaluated by the optimizer. Generally, this is only necessary to
understand why the optimizer won't give you the plan you want or need
(or think you need)!
+ Use the "set statistics io on" command to see the number of logical and
physical i/o's for a query. Scrutinize those queries with high logical i/o
counts.
+ Use the "set statistics time on" command to see the amount of time
(elapsed, execution, parse and compile) a query takes to run.
+ If the optimizer turns out to be a "pessimizer", use the "set forceplan
on" command to change join order to be the order of the tables in the
FROM clause.
+ If the optimizer refuses to select the proper index for a table, you
can force it by adding the index id in parentheses after the table name
in the FROM clause.

SELECT * FROM orders(2), order_detail(1) WHERE ...

This may cause portability issues should index id's vary/change by
site !

Locking and Concurrency

* The Optimizer Decides on Lock Type and Granularity
* Decisions on lock type (share, exclusive, or update) and granularity (page
or table) are made during optimization so make sure your updates and
deletes don't scan the table !
* Exclusive Locks are Only Released Upon Commit or Rollback
* Lock Contention can have a large impact on both throughput and response
time if not considered both in the application and database design !
* Keep transactions as small and short as possible to minimize blocking.
Consider alternatives to "mass" updates and deletes such as a v10.0 cursor
in a stored procedure which frequently commits.
* Never include any "user interaction" in the middle of transactions.
* Shared Locks Generally Released After Page is Read
* Share locks "roll" through result set for concurrency. Only "HOLDLOCK" or
"Isolation Level 3" retain share locks until commit or rollback. Remember
also that HOLDLOCK is for read-consistency. It doesn't block other readers.
* Use optimistic locking techniques such as timestamps and the tsequal()
  function to check for updates to a row since it was read (rather than
  holding locks).
ANSI Changes Affecting Concurrency

* Chained Transactions Risk Concurrency if Behavior not Understood
* Sybase defaults each DML statement to its own transaction if not specified
* ANSI automatically begins a transaction with any SELECT, FETCH, OPEN,
INSERT, UPDATE, or DELETE statement ;
* If Chained Transaction must be used, extreme care must be taken to ensure
locks aren't left held by applications unaware they are within a
transaction! This is especially crucial if running at Level 3 Isolation
* Lock at the Level of Isolation Required by the Query
* Read Consistency is NOT a requirement of every query.
* Choose level 3 only when the business model requires it
* Running at Level 1 but selectively applying HOLDLOCKs as needed is safest
* If you must run at Level 3, use the NOHOLDLOCK clause when you can !
* Beware of (and test) ANSI-compliant third-party applications for
  concurrency problems.
Application Deadlocking

Prior to ASE 10 cursors, many developers simulated cursors by using two or more
connections (dbproc's) and divided the processing between them. Often, this
meant one connection had a SELECT open while "positioned" UPDATEs and DELETEs
were issued on the other connection. The approach inevitably leads to the
following problem:

1. Connection A holds a share lock on page X (remember "Rows Pending" on SQL
Server leave a share lock on the "current" page).
2. Connection B requests an exclusive lock on the same page X and waits...
3. The APPLICATION waits for connection B to succeed before invoking whatever
logic will remove the share lock (perhaps dbnextrow). Of course, that never
happens ...

Since Connection A never requests a lock which Connection B holds, this is NOT
a true server-side deadlock. It's really an "application" deadlock !

Design Alternatives

1. Buffer additional rows in the client that are "nonupdateable". This forces
the shared lock onto a page on which the application will not request an
exclusive lock.
2. Re-code these modules with CT-Library cursors (aka. server-side cursors).
These cursors avoid this problem by disassociating command structures from
connection structures.
3. Re-code these modules with DB-Library cursors (aka. client-side cursors).
These cursors avoid this problem through buffering techniques and
re-issuing of SELECTs. Because of the re-issuing of SELECTs, these cursors
are not recommended for high transaction sites !

Optimizing Cursors with v10.0

* Always Declare a Cursor's Intent (i.e. Read-Only or Updateable)
* Allows for greater control over concurrency implications
* If not specified, ASE will decide for you and usually chooses updateable
* Updateable cursors use UPDATE locks, thereby preventing other U or X locks
* Updateable cursors that include indexed columns in the update list may
  table scan
* SET the Number of Rows for each FETCH
* Allows for greater network optimization than ANSI's one-row fetch
* Rows fetched via Open Client cursors are transparently buffered in the
  client: FETCH -> Open Client <- N rows
* Keep the Cursor Open on a Commit / Rollback
* ANSI closes cursors with each COMMIT, causing either poor throughput (by
  making the server re-materialize the result set) or poor concurrency (by
  holding locks)
* Open Multiple Cursors on a Single Connection
* Reduces resource consumption on both client and Server
* Eliminates the risk of the client deadlocking with itself

Special Issues for Batch Applications

ASE was not designed as a batch subsystem! It was designed as an RDBMS for
large multi-user applications. Designers of batch-oriented applications should
consider the following design alternatives to maximize performance:

Design Alternatives :

* Minimize Client/Server Interaction Whenever Possible
* Don't turn ASE into a "file system" by issuing single table / single row
  requests when, in actuality, set logic applies.
* Maximize TDS packet size for efficient interprocess communication (see
  Variable-sized TDS Packets below)
* New ASE 10.0 cursors declared and processed entirely within stored
  procedures and triggers offer significant performance gains in batch
  environments
* Investigate Opportunities to Parallelize Processing
* Breaking up single processes into multiple, concurrently executing
  connections (where possible) will outperform single-streamed processes
* Make Use of TEMPDB for Intermediate Storage of Useful Data

Asynchronous Queries

Many, if not most, applications and 3rd party tools are coded to send queries
with the DB-Library call dbsqlexec(), which is a synchronous call! It sends a
query and then waits for a response from ASE that the query has completed!

Designing your applications for asynchronous queries provides many benefits:

1. A "cooperative" multi-tasking application design under Windows will allow
   users to run other Windows applications while your long queries are
   processed!
2. Provides design opportunities to parallelize work across multiple ASE
   connections.

Implementation Choices:

* System 10 Client Library Applications:
* True asynchronous behaviour is built into the entire library. Through the
appropriate use of call-backs, asynchronous behavior is the normal
processing paradigm.
* Windows DB-Library Applications (not true async but polling for data):
* Use dbsqlsend(), dbsqlok(), and dbdataready() in conjunction with some
additional code in WinMain() to pass control to a background process. Code
samples which outline two different Windows programming approaches (a
PeekMessage loop and a Windows Timer approach) are available in the
Microsoft Software Library on Compuserve (GO MSL). Look for SQLBKGD.ZIP
* Non-PC DB-Library Applications (not true async but polling for data):
* Use dbsqlsend(), dbsqlok(), and dbpoll() to utilize non-blocking functions.
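The send / poll / collect pattern behind dbsqlsend(), dbdataready() and
dbsqlok() can be modelled in any language. The Python sketch below is an
illustration only: the class, its method names, and the worker thread are
stand-ins mirroring the shape of the C calls, not the real DB-Library API.

```python
import threading
import time

class AsyncQuery:
    """Toy model of the DB-Library polling pattern: dbsqlsend() starts the
    query and returns at once, dbdataready() polls without blocking, and
    dbsqlok() collects the results. The 'server' is just a worker thread."""

    def __init__(self, work):
        self._work = work
        self._result = None
        self._done = threading.Event()

    def dbsqlsend(self):
        # Send the query and return immediately (non-blocking).
        def run():
            self._result = self._work()
            self._done.set()
        threading.Thread(target=run, daemon=True).start()

    def dbdataready(self):
        # Poll: has the server finished yet?
        return self._done.is_set()

    def dbsqlok(self):
        # Block until done, then pick up the results.
        self._done.wait()
        return self._result

q = AsyncQuery(lambda: sum(range(10)))   # stand-in for a long SELECT
q.dbsqlsend()
while not q.dbdataready():
    time.sleep(0.01)                     # the app is free to do other work here
print(q.dbsqlok())                       # 45
```

In a real Windows DB-Library application, the polling step would live in the
message loop rather than a sleep loop, as described above.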

Generating Sequential Numbers

Many applications use unique, sequentially increasing numbers, often as
primary keys. While there are good benefits to this approach, generating these
keys can be a serious contention point if care is not taken. For a complete
discussion of the alternatives, download Malcolm Colton's White Paper on
Sequential Keys from the SQL Server Library of our OpenLine forum on
Compuserve.

The two best alternatives are outlined below.

1. "Primary Key" Table Storing Last Key Assigned
   + Minimize contention by either using a separate "PK" table for each user
     table or padding out each row to a page. Make sure the UPDATE is issued
     before the SELECT that reads the new key.
   + Don't include the "PK" table's update in the same transaction as the
     INSERT. It will serialize the transactions.

UPDATE pk_table SET nextkey = nextkey + 1
[WHERE table_name = @tbl_name]

/* Now retrieve the information */
SELECT nextkey FROM pk_table
[WHERE table_name = @tbl_name]

+ "Gap-less" sequences require additional logic to store and retrieve
rejected values
2. IDENTITY Columns (v10.0 only)
+ Last key assigned for each table is stored in memory and automatically
included in all INSERTs (BCP too). This should be the method of choice
for performance.
+ Choose a large enough numeric or else all inserts will stop once the
max is hit.
+ Potential rollbacks in long transactions may cause gaps in the sequence
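The "PK table" scheme (alternative 1) can be sketched in miniature. In this
illustrative Python model, an in-process dictionary stands in for the pk_table
rows and a mutex stands in for the exclusive lock taken by the UPDATE; in a
real system this state lives in the server, not the client.

```python
import threading

class KeyTable:
    """Toy model of the "PK table" scheme: one entry per user table holding
    the last key assigned. The lock held during next_key() is the analogue
    of the exclusive lock taken by the UPDATE, which is exactly why this
    allocation must NOT sit inside the caller's INSERT transaction: the
    shorter it is held, the less the allocator serializes concurrent work."""

    def __init__(self):
        self._nextkey = {}            # table_name -> last key assigned
        self._lock = threading.Lock()

    def next_key(self, table_name):
        with self._lock:              # UPDATE pk_table SET nextkey = nextkey + 1
            key = self._nextkey.get(table_name, 0) + 1
            self._nextkey[table_name] = key
            return key                # SELECT nextkey FROM pk_table ...

pk = KeyTable()
print([pk.next_key("orders") for _ in range(3)])  # [1, 2, 3]
print(pk.next_key("invoices"))                     # 1
```

Using one entry per user table mirrors the advice to use a separate "PK" table
(or row-per-page padding) per user table: allocations for different tables
never contend on the same row.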

Other Application Issues

+ Transaction Logging Can Bottleneck Some High Transaction Environments
+ Committing a Transaction Must Initiate a Physical Write for
  Recoverability
+ Implementing multiple statements as a transaction can assist in these
  environments by minimizing the number of log writes (the log is flushed
  to disk on commits).
+ Utilizing the Client Machine's Processing Power Balances Load
+ Client/Server doesn't dictate that everything be done on the Server!
+ Consider moving "presentation" related tasks such as string or
  mathematical manipulations, sorting, or, in some cases, even
  aggregating to the client.
+ Populating of "Temporary" Tables Should Use "SELECT INTO" - balance
  this with dynamic creation of temporary tables in an OLTP environment.
  Dynamic creation may cause blocks in your tempdb.
+ "SELECT INTO" operations are not logged and thus are significantly
  faster than their INSERT with nested SELECT counterparts.
+ Consider Porting Applications to Client Library Over Time
+ True Asynchronous Behavior Throughout Library
+ Array Binding for SELECTs
+ Dynamic SQL
+ Support for ClientLib-initiated callback functions
+ Support for Server-side Cursors
+ Shared Structures with Server Library (Open Server 10)

Physical Database Design Issues

+ Normalized -vs- Denormalized Design
+ Index Selection
+ Promote "Updates-in-Place" Design
+ Promote Parallel I/O Opportunities

Normalized -vs- Denormalized

+ Always Start with a Completely Normalized Database
  + Denormalization should be an optimization taken as a result of a
    performance problem
+ Benefits of a normalized database include :
  1. Accelerates searching, sorting, and index creation since tables are
     narrower ;
  2. Allows more clustered indexes and hence more flexibility in tuning
     queries, since there are more tables ;
  3. Accelerates index searching since indexes tend to be narrower and
     perhaps shorter ;
  4. Allows better use of segments to control physical placement of
     tables ;
  5. Fewer indexes per table, helping UPDATE, INSERT, and DELETE
     performance ;
  6. Fewer NULLs and less redundant data, increasing compactness of the
     database ;
  7. Accelerates trigger execution by minimizing the extra integrity
     work of maintaining redundant data.
+ Joins are Generally Very Fast Provided Proper Indexes are Available
  + Normal caching and the cindextrips parameter (discussed in the Server
    section) mean each join will do on average only 1-2 physical I/Os.
  + The cost of a logical I/O (getting a page from cache) is only 1-2
    milliseconds.
+ There Are Some Good Reasons to Denormalize
  1. All queries require access to the "full" set of joined data.
  2. The majority of applications scan entire tables doing joins.
  3. The computational complexity of derived columns requires storage for
     SELECTs.
  4. Others ...

Index Selection

+ Without a clustered index, all INSERTs and "out-of-place" UPDATEs go to
the last page. The lock contention in high transaction environments
would be prohibitive. This is also true for INSERTs to a clustered
index on a monotonically increasing key.
+ High INSERT environments should always cluster on a key which provides
the most "randomness" (to minimize lock / device contention) that is
usable in many queries. Note this is generally not your primary key !
+ Prime candidates for clustered index (in addition to the above) include
o Columns Accessed by a Range
o Columns Used with Order By, Group By, or Joins
+ Indexes Help SELECTs and Hurt INSERTs
+ Too many indexes can significantly hurt performance of INSERTs and
"out-of-place" UPDATEs.
+ Prime candidates for nonclustered indexes include :
o Columns Used in Queries Requiring Index Coverage
o Columns Used to Access Less than 20% (rule of thumb) of the Data.
+ Unique indexes should be defined as UNIQUE to help the optimizer
+ Minimize index page splits with Fillfactor (helps concurrency and
minimizes deadlocks)
+ Keep the Size of the Key as Small as Possible
+ Accelerates index scans and tree traversals
+ Use small datatypes whenever possible. Numerics should also be used
  whenever possible as they compare faster than strings.

Promote "Update-in-Place" Design

+ "Update-in-Place" Faster by Orders of Magnitude
+ Performance gain dependent on the number of indexes. A recent benchmark
  (160 byte rows, 1 clustered index and 2 nonclustered) showed an 800%
  difference.
+ The alternative ("Out-of-Place" Update) is implemented as a physical
  DELETE followed by a physical INSERT. These tactics result in:
  1. Increased Lock Contention
  2. Increased Chance of Deadlock
  3. Decreased Response Time and Throughput
+ Currently (System 10 and below), Rules for "Update-in-Place" Behavior
Include :
1. Columns updated cannot be variable length or allow nulls
2. Columns updated cannot be part of an index used to locate the row
   to update
3. No update trigger on table being updated (because the inserted and
deleted tables used in triggers get their data from the log)

In v4.9.x and below, only one row may be affected and the
optimizer must know this in advance by choosing a UNIQUE index.
System 10 eliminated this limitation.
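The three System 10 rules can be restated as a small predicate. The Python
sketch below is illustrative only: the real decision is made inside the
server's optimizer, and all parameter and column names here are invented for
the example.

```python
def update_in_place_ok(updated_cols, varlen_or_null_cols,
                       index_cols_used_to_locate, has_update_trigger):
    """Illustrative check of the System 10 update-in-place rules.

    updated_cols            - columns named in the SET clause
    varlen_or_null_cols     - columns that are variable length or nullable
    index_cols_used_to_locate - columns of the index used to find the row
    has_update_trigger      - whether the table has an update trigger
    """
    updated = set(updated_cols)
    # Rule 1: no variable-length or nullable column may be updated.
    if updated & set(varlen_or_null_cols):
        return False
    # Rule 2: no updated column may be part of the index locating the row.
    if updated & set(index_cols_used_to_locate):
        return False
    # Rule 3: no update trigger on the table being updated.
    if has_update_trigger:
        return False
    return True

print(update_in_place_ok({"qty"}, {"comment"}, {"order_id"}, False))  # True
print(update_in_place_ok({"order_id"}, set(), {"order_id"}, False))   # False
```

Encoding the rules this way makes it easy to audit a schema design for
update-in-place opportunities before committing to it.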

Promote Parallel I/O Opportunities

+ For I/O-bound Multi-User Systems, Use a Lot of Logical and Physical
  Devices
+ Plan balanced separation of objects across logical and physical devices.
+ Increased number of physical devices (including controllers) ensures
  sufficient physical bandwidth
+ Increased number of logical Sybase devices ensures minimal contention
  for internal resources. Look at SQL Monitor's Device I/O Hit Rate for
  clues. Also watch out for the 128 device limit per database.
+ Create Database (in v10) starts parallel I/O on up to 6 devices at a
time concurrently. If taken advantage of, expect an 800% performance
gain. A 2Gb TPC-B database that took 4.5 hours under 4.9.1 to create
now takes 26 minutes if created on 6 independent devices !
+ Use Sybase Segments to Ensure Control of Placement

This is the only way to guarantee logical separation of objects on
devices to reduce contention for internal resources.

+ Dedicate a separate physical device and controller to the transaction
  log in tempdb too.
+ Optimize TEMPDB Also if Heavily Accessed
+ Increased number of logical Sybase devices ensures minimal contention
  for internal resources.
+ Systems requiring increased log throughput today must partition the
  database into separate databases.

Breaking up one logical database into multiple smaller databases
increases the number of transaction logs working in parallel.

Networking Issues

+ Choice of Transport Stacks
+ Variable Sized TDS Packets
+ TCP/IP Packet Batching

Choice of Transport Stacks for PCs

+ Choose a Stack that Supports "Attention Signals" (aka "Out-of-Band Data")
+ Provides for the most efficient mechanism to cancel queries.
+ Essential for sites providing ad-hoc query access to large databases.
+ Without "Attention Signal" capabilities (or the urgent flag in the
  connection string), the DB-Library functions dbcanquery() and
  dbcancel() will cause ASE to send all rows back to the Client
  DB-Library as quickly as possible so as to complete the query. This can
  be very expensive if the result set is large and, from the user's
  perspective, causes the application to appear as though it has hung.
+ With "Attention Signal" capabilities, Net-Library is able to send an
  out-of-sequence packet requesting that ASE physically throw away any
  remaining results, providing for instantaneous response.
+ Currently, the following network vendors and associated protocols
  support an "Attention Signal" capable implementation:
  1. NetManage NEWT
  2. Named Pipes (10860) - Do not use the urgent parameter with this Netlib
  3. Novell LAN Workplace v4.1 - Patch required from Novell
  4. Novell SPX - Implemented internally through an "In-Band" packet
  5. Wollongong Pathway
  6. Microsoft TCP - Patch required from Microsoft

Variable-sized TDS Packets

Pre-v4.6 TDS Does Not Optimize Network Performance

The current ASE TDS packet size is limited to 512 bytes, while network frame
sizes are significantly larger (1508 bytes on Ethernet and 4120 bytes on
Token Ring).

The specific protocol may have other limitations!

For example:
+ IPX is limited to 576 bytes in a routed network.
+ SPX requires acknowledgement of every packet before it will send
another. A recent benchmark measured a 300% performance hit over TCP in
"large" data transfers (small transfers showed no difference).
+ Open Client Apps Can "Request" a Larger Packet
  This has been shown to give a significant performance improvement on
  "large" data transfers such as BCP, Text / Image Handling, and Large
  Result Sets.
o clients:
# isql -Usa -Annnnn
# bcp -Usa -Annnnn
# ct_con_props (connection, CS_SET, CS_PACKETSIZE, &packetsize,
sizeof(packetsize), NULL)
o An "SA" must Configure each Server's Defaults Properly
# sp_configure "default packet size", nnnnn - Sets default packet
size per client connection (defaults to 512)
# sp_configure "maximum packet size", nnnnn - Sets maximum TDS
packet size per client connection (defaults to 512)
# sp_configure "additional netmem", nnnnn - Additional memory for
large packets taken from separate pool. This memory does not
come from the sp_configure memory setting.

Optimal value = (# connections using large packets * large
packet size * 3) + an additional 1-2% of the above
calculation for overhead

Each connection using large packets has 3 network buffers: one
to read; one to write; and one overflow.
@ Default network memory - Default-sized packets come from
  this memory pool.
@ Additional network memory - Big packets come from this
  memory pool.

If not enough memory is available in this pool, the server
will give the client a smaller packet size, down to the default.
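The sizing formula above translates directly into a small helper. A sketch,
assuming the 1-2% overhead is applied on top of the three-buffers-per-connection
base (the 2% figure used in the example is the top of the stated range, chosen
here arbitrarily).

```python
def additional_netmem(large_packet_connections, packet_size, overhead=0.02):
    """Optimal 'additional netmem' per the formula above: three network
    buffers (one read, one write, one overflow) per connection using
    large packets, plus 1-2% overhead on that base figure."""
    base = large_packet_connections * packet_size * 3
    return int(base * (1 + overhead))

# 40 connections using 4096-byte TDS packets:
print(additional_netmem(40, 4096))  # 501350
```

The result would then be rounded up to whatever allocation granularity the
server requires before being fed to sp_configure "additional netmem".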

TCP/IP Packet Batching

+ TCP Networking Layer Defaults to "Packet Batching"
+ This means that TCP/IP will batch small logical packets into one larger
  physical packet by briefly delaying packets in an effort to fill the
  physical network frames (Ethernet, Token-Ring) with as much data as
  possible.
+ Designed to improve performance in terminal emulation environments
  where there are mostly only keystrokes being sent across the network.
+ Some Environments Benefit from Disabling Packet Batching
+ Applies mainly to socket-based networks (BSD) although we have seen
  some TLI networks such as NCR's benefit.
+ Applications sending very small result sets or statuses from sprocs
  will usually benefit. Benchmark with your own application to be sure.
+ Disabling packet batching makes ASE open all connections with the
  TCP_NODELAY option. Packets will be sent regardless of size.
+ To disable packet batching, in pre-Sys 11, start ASE with the 1610
Trace Flag.

$SYBASE/dataserver -T1610 -d /usr/u/sybase/master.dat ...

Your errorlog will indicate the use of this option with the message:

ASE booted with TCP_NODELAY enabled.
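What the 1610 trace flag does server-wide is, per connection, the standard
TCP_NODELAY socket option (disabling Nagle batching). A generic Python
illustration of the option itself, not ASE code:

```python
import socket

# A TCP socket with Nagle packet batching disabled: the per-connection
# equivalent of booting ASE with the 1610 trace flag.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify: a non-zero value means small packets go out immediately
# instead of being delayed to fill a larger physical frame.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```

The trade-off is the same as described above: more, smaller physical packets
in exchange for lower latency on small result sets.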

Operating System Issues

+ Never Let ASE Page Fault
+ It is better to configure ASE with less memory and do more physical
database I/O than to page fault. OS page faults are synchronous and
stop the entire dataserver engine until the page fault completes. Since
database I/O's are asynchronous, other user tasks can continue!
+ Use Process Affinitying in SMP Environments, if Supported
+ Affinitying dataserver engines to specific CPUs minimizes overhead
associated with moving process information (registers, etc) between
CPUs. Most implementations will preference other tasks onto other CPUs
as well allowing even more CPU time for dataserver engines.
+ Watch out for OS's which are not fully symmetric. Affinitying
dataserver engines onto CPUs that are heavily used by the OS can
seriously degrade performance. Benchmark with your application to find
optimal binding.
+ Increase priority of dataserver engines, if supported
+ Give ASE the opportunity to do more work. If ASE has nothing to do, it
will voluntarily yield the CPU.
+ Watch out for OS's which externalize their async drivers. They need to
run too!
+ Use of OS Monitors to Verify Resource Usage
+ The OS CPU monitors only "know" that an instruction is being executed.
  With ASE's own threading and scheduling, it can routinely be 90% idle
  when the OS thinks it's 90% busy. SQL Monitor shows real CPU usage.
+ Look into high disk I/O wait time or I/O queue lengths. These indicate
  physical saturation points in the I/O subsystem or poor data
  distribution.
+ Disk utilization above 50% may be subject to queuing effects which
  often manifest themselves as uneven response times.
+ Look into high system call counts which may be symptomatic of problems.
+ Look into high context switch counts which may also be symptomatic of
  problems.
+ Optimize your kernel for ASE (minimal OS file buffering, adequate
  network buffers, appropriate KEEPALIVE values, etc).
+ Use OS Monitors and SQL Monitor to Determine Bottlenecks
+ Most likely "Non-Application" contention points include:
Resource Where to Look
--------- --------------
CPU Performance SQL Monitor - CPU and Trends

Physical I/O Subsystem OS Monitoring tools - iostat, sar...

Transaction Log SQL Monitor - Device I/O and
Device Hit Rate
on Log Device

ASE Network Polling SQL Monitor - Network and Benchmark

Memory SQL Monitor - Data and Cache

+ Use of Vendor-supported Striping such as LVM and RAID
+ These technologies provide a very simple and effective mechanism of
load balancing I/O across physical devices and channels.
+ Use them provided they support asynchronous I/O and reliable writes.
+ These approaches do not eliminate the need for Sybase segments to
ensure minimal contention for internal resources.
+ Non-read-only environments should expect performance degradations when
using RAID levels other than level 0. These levels all include fault
tolerance where each write requires additional reads to calculate a
"parity" as well as the extra write of the parity data.

Hardware Configuration Issues

+ Number of CPUs
+ Use information from SQL Monitor to assess ASE's CPU usage.
+ In SMP environments, dedicate at least one CPU for the OS.
+ Advantages and scaling of VSA is application-dependent. VSA was
architected with large multi-user systems in mind.
+ I/O Subsystem Configuration
+ Look into high Disk I/O Wait Times or I/O Queue Lengths. These may
indicate physical I/O saturation points or poor data distribution.
+ Disk Utilization above 50% may be subject to queuing effects which
often manifest themselves as uneven response times.
+ Logical Volume configurations can impact performance of operations such
as create database, create index, and bcp. To optimize for these
operations, create Logical Volumes such that they start on different
channels / disks to ensure I/O is spread across channels.
+ Discuss device and controller throughput with hardware vendors to
  ensure channel throughput is high enough to drive all devices at
  maximum speed.

General ASE Tuning

+ Changing Values with sp_configure or buildmaster

It is imperative that you only use sp_configure to change those
parameters that it currently maintains because the process of
reconfiguring actually recalculates a number of other buildmaster
parameters. Using the Buildmaster utility to change a parameter
"managed" by sp_configure may result in a mis-configured server and
cause adverse performance or even worse ...

+ Sizing Procedure Cache
o ASE maintains an MRU-LRU chain of stored procedure query plans. As
users execute sprocs, ASE looks in cache for a query plan to use.
However, stored procedure query plans are currently not re-entrant!
If a query plan is available, it is placed on the MRU and execution
begins. If no plan is in memory, or if all copies are in use, a new
copy is read from the sysprocedures table. It is then optimized and
put on the MRU for execution.
o Use dbcc memusage to evaluate the size and number of each sproc
currently in cache. Use SQL Monitor's cache statistics to get your
average cache hit ratio. Ideally during production, one would hope
to see a high hit ratio to minimize the procedure reads from disk.
Use this information in conjunction with your desired hit ratio to
calculate the amount of memory needed.
+ Memory
o Tuning memory is more a price/performance issue than anything else!
  The more memory you have available, the greater the probability of
  minimizing physical I/O. This is an important goal, though. Not
  only does physical I/O take significantly longer, but threads doing
  physical I/O must go through the scheduler once the I/O completes.
  This means that work on behalf of the thread may not actually
  continue to execute for quite a while!
o There are no longer (as of v4.8) any inherent limitations in ASE
which cause a point of diminishing returns on memory size.
o Calculate Memory based on the following algorithm:

Total Memory = Dataserver Executable Size (in bytes) +
Static Overhead of 1 Mb +
User Connections x 40,960 bytes +
Open Databases x 644 bytes +
Locks x 32 bytes +
Devices x 45,056 bytes +
Procedure Cache +
Data Cache
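The algorithm translates directly into a helper function. A sketch using the
constants from the list above; the example inputs are invented and the
executable size, procedure cache, and data cache figures must come from your
own server.

```python
def ase_total_memory(exe_size, user_connections, open_databases,
                     locks, devices, procedure_cache, data_cache):
    """Total ASE memory per the algorithm above (all values in bytes)."""
    STATIC_OVERHEAD = 1 * 1024 * 1024          # 1 Mb static overhead
    return (exe_size
            + STATIC_OVERHEAD
            + user_connections * 40960         # 40,960 bytes per connection
            + open_databases * 644             # 644 bytes per open database
            + locks * 32                       # 32 bytes per lock
            + devices * 45056                  # 45,056 bytes per device
            + procedure_cache
            + data_cache)

# e.g. a small server: 25 connections, 5 databases, 5000 locks, 10 devices
total = ase_total_memory(3_000_000, 25, 5, 5000, 10,
                         procedure_cache=10_000_000, data_cache=50_000_000)
print(total)  # 65686356
```

Working the formula in both directions (total memory fixed by the hardware,
caches derived) is the usual way to size the procedure and data caches.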

+ Recovery Interval
o As users change data in ASE, only the transaction log is written to
disk right away for recoverability. "Dirty" data and index pages
are kept in cache and written to disk at a later time. This
provides two major benefits:
1. Many transactions may change a page yet only one physical write
is done
2. ASE can schedule the physical writes "when appropriate"
o ASE must eventually write these "dirty" pages to disk.
o A checkpoint process wakes up periodically and "walks" the cache
chain looking for dirty pages to write to disk
o The recovery interval controls how often the checkpoint process
  writes dirty pages to disk.
+ Tuning Recovery Interval
o A low value may cause unnecessary physical I/O, lowering the throughput
  of the system. Automatic recovery is generally much faster during
  boot-up.
o A high value minimizes unnecessary physical I/O and helps the
  throughput of the system. Automatic recovery may take substantial
  time during boot-up.

Audit Performance Tuning for v10.0

+ Potentially as Write Intensive as Logging
+ Isolate Audit I/O from other components.
+ Since auditing nearly always involves sequential writes, RAID Level 0
disk striping or other byte-level striping technology should provide
the best performance (theoretically).
+ Size Audit Queue Carefully
+ Audit records generated by clients are stored in an in-memory audit
  queue until they can be processed.
+ Tune the queue's size with sp_configure "audit queue size", nnnn (in
  records).
+ Sizing this queue too small will seriously impact performance, since all
  user processes who generate audit activity will sleep if the queue
  fills up.
+ Size Audit Database Carefully
+ Each audit row could require up to 416 bytes, depending on what is
  audited.
+ Sizing this database too small will seriously impact performance, since
  all user processes who generate audit activity will sleep if the
  database fills up.

Back to top


1.5.2: Temp Tables and OLTP


(Note from Ed: It appears that with ASE 12, Sybase have solved the problem of
select/into locking the system tables for the duration of the operation. The
operation is now split into two parts: the creation of the table followed by
the insert. The system tables are only locked for the first part, and so, to
all intents and purposes, the operation acts like a create/insert pair whilst
remaining minimally logged.)

Our shop would like to inform folks of a potential problem when using temporary
tables in an OLTP environment. Using temporary tables dynamically in an OLTP
production environment may result in blocking (single-threading) as the number
of transactions using the temporary tables increases.

Does it affect my application?

This warning only applies to SQL that is being invoked frequently in an OLTP
production environment, where the use of "select into..." or "create table
#temp" is common. Applications using temp tables may experience blocking
problems as the number of transactions increases.

This warning does not apply to SQL that may be in a report or that is not used
frequently. Frequently is defined as several times per second.

Why? Why? Why?

Our shop was working with an application owner to chase down a problem they
were having during peak periods. The problem they were having was severe
blocking in tempdb.

What the DBA group witnessed was that as the number of transactions
increased on this particular application, the number of blocks in tempdb also
increased.

We ran some independent tests to simulate a heavily loaded server and
discovered that the data pages in contention were in tempdb's syscolumns table.

This actually makes sense, because during table creation entries are added to
this table, regardless of whether it is a temporary or permanent table.

We ran another simulation where we created the tables before the stored
procedure used them, and the blocks went away. We then performed an additional
test to determine what impact creating temporary tables dynamically would have
on the server, and discovered that there is a 33% performance gain from
creating the tables once rather than re-creating them on every call.

Your mileage may vary.

How do I fix this?

To make things better, do the 90's thing -- reduce and reuse your temp tables.
During one application connection/session, aim to create the temp tables only
once.

Let's look at the lifespan of a temp table. If temp tables are created in a
batch within a connection, then all future batches and stored procs will have
access to such temp tables until they're dropped; this is the reduce and reuse
strategy we recommend. However, if temp tables are created in a stored proc,
then the database will drop the temp tables when the stored proc ends, and this
means repeated and multiple temp table creations; you want to avoid this.

Recode your stored procedures so that they assume that the temporary tables
already exist, and then alter your application so that it creates the temporary
tables at start-up -- once and not every time the stored procedure is invoked.

That's it! Pretty simple eh?


The upshot is that you can realize roughly a 33% performance gain and avoid
the blocking, whose cost is difficult to quantify due to the specificity of
each application.

Basically, you cannot lose.

Solution in pseudo-code

If you have an application that creates the same temp table many times within
one connection, here's how to convert it to reduce and reuse temp table
creations. Raymond Lew has supplied a detailed example for trying this.


open connection
loop until time to go
exec procedure vavoom_often
/* vavoom_often creates and uses #gocart for every call */
/* eg: select * into #gocart from gocart */
close connection


open connection
/* Create the temporary table outside of the sproc */
select * into #gocart from gocart where 1 = 2 ;
loop until time to go
exec procedure vavoom_often
/* vavoom_often reuses #gocart which */
/* was created before exec of vavoom_often */
/* - First statement may be a truncate table #gocart */
/* - Execute with recompile */
/* if your table will have more than 10 data pages */
/* as the optimizer will assume 10 data pages for temp tables */
close connection

Note that it is necessary to call out the code to create the table, and it
becomes a pain in the butt because the create-table statement will have to be
replicated in any stored proc and in the initialization part of the application
- this can be a maintenance nuisance. This can be solved by using a macro
package such as m4 or cpp, or by using and adapting the scripts from Raymond
Lew below.


Brian Black posted a stronger notice than this to the SYBASE-L list, and I
would agree, that any use of select/into in a production environment should be
looked at very hard. Even DSS environments, especially if they share tempdb
with an OLTP environment, should use select/into with care.


From: Raymond Lew

At our company, we try to keep the database and the application loosely coupled
to allow independent changes at the frontend or the backend as long as the
interface stays the same. Embedding temp table definitions in the frontend
would make this more difficult.

To get away from having to embed the temp table definitions in the frontend
code, we are storing the temp table definitions in the database. The frontend
programs retrieve the definitions and declare the tables dynamically at the
beginning of each session. This allows for the change of backend procedures
without changes in the frontend when the API does not change.

Enclosed below are three scripts. The first is an isql script to create the
tables to hold the definitions. The second is a shell script to set up a sample
procedure named vavoom. The third is a shell script to demonstrate the
structure of application code.

I would like to thank Charles Forget and Gordon Rees for their assistance on
these scripts.

--start of setup------------------------------------------------------
/* Raymond Lew - 1996-02-20 */
/* This isql script will set up the following tables:
   gocart - sample table
   app_temp_defn - where temp table definitions are stored
   app_temp_defn_group - a logical grouping of temp table definitions
                         for an application function */

/* gocart table - sample table */
drop table gocart
go
create table gocart
(cartname char(10) null
,cartcolor char(30) null)
go
create unique clustered index gocart1 on gocart (cartname)
go
insert into gocart values ('go1','blue ')
insert into gocart values ('go2','pink ')
insert into gocart values ('go3','green ')
insert into gocart values ('go4','red ')
go

/* app_temp_defn - definition of temp tables with their indexes */
drop table app_temp_defn
go
create table app_temp_defn
(/* note: temp tables are unique only in first 13 chars */
objectname char(20) not null
,seq_no smallint not null
,defntext char(255) not null)
go
create unique clustered index app_temp_defn1
on app_temp_defn (objectname,seq_no)
go
insert into app_temp_defn
values ('#gocart',1,'select * into #gocart')
insert into app_temp_defn
values ('#gocart',2,' from gocart where 1=2 ')
insert into app_temp_defn
values ('#gocartindex',1,
"create unique index gocartindex on #gocart (cartname) ")
insert into app_temp_defn
values ('#gocart1',1, 'select * into #gocart1 from gocart where 1=2')
go

/* app_temp_defn_group - groupings of temp definitions by application */
drop table app_temp_defn_group
go
create table app_temp_defn_group
(appname char(8) not null
,objectname char(20) not null)
go
create unique clustered index app_temp_defn_group1
on app_temp_defn_group (appname,objectname)
go
insert into app_temp_defn_group values('abc','#gocart')
insert into app_temp_defn_group values('abc','#gocartindex')
go

/* get_temp_defn - proc for getting the temp defn by group */
drop procedure get_temp_defn
go
create procedure get_temp_defn
@appname char(8)
as
if @appname = ''
    select defntext
    from app_temp_defn
    order by objectname, seq_no
else
    select defntext
    from app_temp_defn a
       , app_temp_defn_group b
    where a.objectname = b.objectname
      and b.appname = @appname
    order by a.objectname, a.seq_no
go


/* let's try some tests */
exec get_temp_defn ''
exec get_temp_defn 'abc'
--end of setup --------------------------------------------------

--- start of make.vavoom --------------------------------------------
# Raymond Lew - 1996-02-20
# bourne shell script for creating stored procedures using
# app_temp_defn table
# demo procedure vavoom created here
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body

# get the table defn's into a text file
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

# create the proc vavoom after running the temp defn's into db
isql -Ukryten -Pjollyguy -Sstarbug -e << eot | more
`cat tabletext`
go
drop procedure vavoom
go
create procedure vavoom
@color char(10)
as
truncate table #gocart1 /* who knows what lurks in temp tables */
if @color = ''
insert #gocart1 select * from gocart
else
insert #gocart1 select * from gocart where cartcolor=@color
select @color '@color', * from #gocart1
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

# end of unix script
--- end of make.vavoom --------------------------------------------

--- start of -------------------------------------------
# Raymond Lew 1996-02-01
# test script: demonstrate with a bourne shell how an application
# would use the temp table definitions stored in the database
# note: you must run setup and make.vavoom first
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body

# get the table defn's into a text file
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

isql -Ukryten -Pjollyguy -Sstarbug -e << eot | more
`cat tabletext`
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

# end of unix script
--- end of -------------------------------------------

That's all, folks. Have Fun

Back to top


1.5.3: Differences between clustered and non-clustered



I'd like to talk about the difference between a clustered and a non-clustered
index. The two are very different and it's very important to understand the
difference between the two to in order to know when and how to use each.

I've pondered hard to find the best analogy that I could think of and I've come
up with ... the phone book. Yes, a phone book.

Imagine that each page in our phone book is equivalent to a Sybase 2K data
page. Every time we read a page from our phone book it is equivalent to one
disk I/O.

Since we are imagining, let's also imagine that our mythical ASE (that runs
against the phone book) has only enough data cache to buffer 200 phone pages.
When our data cache gets full we have to flush an old page out so we can read
in a new one.

Fasten your seat belts, because here we go...

Clustered Index

A phone book lists everyone by last name. We have an A section, we have a B
section and so forth. Within each section my phone book is clever enough to
list the starting and ending names for the given page.

The phone book is clustered by last name.

create clustered index phone_book_idx on phone_book (last_name)

It's fast to perform the following queries on the phone book:

* Find the address of those whose last name is Cisar.
* Find the address of those whose last name is between Even and Fa

Searches that don't work well:

* Find the address of those whose phone number is 440-1300.
* Find the address of those whose prefix is 440

In order to determine the answer to the two above we'd have to search the
entire phone book. We can call that a table scan.

Non-Clustered Index

To help us solve the problem above we can build a non-clustered index.

create nonclustered index phone_number_idx on phone_book (phone_number)

Our non-clustered index will be built and maintained by our Mythical ASE as
follows:

1. Create a data structure that will house a phone_number and information
where the phone_number exists in the phone book: page number and the row
within the page.

The phone numbers will be kept in ascending order.

2. Scan the entire phone book and add an entry to our data structure above for
each phone number found.
3. For each phone number found, note along side it the page number that it was
located and which row it was in.

Any time we insert, update or delete numbers, our M-ASE will maintain this
secondary data structure. It's such a nice Server.

Now when we ask the question:

Find the address of those whose phone number is 440-1300

we don't look at the phone book directly but go to our new data structure and
it tells us which page and row within the page the above phone number can be
found. Neat eh?

Drawbacks? Well, yes. Because we probably still can't answer the question:

Find the address of those whose prefix is 440

This is because of the data structure being used to implement non-clustered
indexes. The structure is a list of ordered values (phone numbers) which point
to the actual data in the phone book. This indirectness can lead to trouble
when a range or a match query is issued.

The structure may look like this:

|Phone Number | Page Number/Row |
| 440-0000 | 300/23 |
| 440-0001 | 973/45 |
| 440-0002 | 23/2 |
| ... | |
| 440-0030 | 973/45 |
| 440-0031 | 553/23 |
| ... | |

As one can see, certain phone numbers may map to the same page. This makes
sense, but we need to consider one of our constraints: our Server only has room
for 200 phone pages.

What may happen is that we re-read the same phone page many times. This isn't a
problem if the phone page is in memory. We have limited memory, however, and we
may have to flush our memory to make room for other phone pages. So the
re-reading may actually be a disk I/O.

The Server needs to decide when it's best to do a table scan versus using the
non-clustered index to satisfy mini-range type of queries. The way it decides
this is by applying a heuristic based on the information maintained when an
update statistics is performed.

In summary, non-clustered indexes work really well when used for highly
selective queries and they may work for short, range type of queries.

Suggested Uses

Having suffered many table corruption situations (with 150 ASEs who wouldn't? :
-)), I'd say always have a clustered index. With a clustered index you can fish
data out around the bad spots on the table thus having minimal data loss.

When you cluster, build the cluster to satisfy the largest percentage of range
type queries. Don't put the clustered index on your primary key because
typically primary keys are increasing linearly. What happens is that you end up
inserting all new rows at the end of the table thus creating a hot spot on the
last data page.

For detail rows, create the clustered index on the commonly accessed foreign
key. This will aid joins from the master to it.

Use nonclustered indexes to aid queries where your selection is very selective.
For example, primary keys. :-)
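
To make the advice above concrete, here is a sketch using a hypothetical
master/detail pair (all table and index names here are made up for
illustration):

create table orders
( order_id int not null -- increasing primary key
,customer char(8) not null )

create table order_lines
( order_id int not null -- foreign key to orders
,line_no int not null
,qty int not null )

-- cluster the detail table on the commonly accessed foreign key
-- to aid master-to-detail joins and range queries
create clustered index order_lines_fk on order_lines (order_id)

-- highly selective lookups go through a nonclustered index
create unique nonclustered index orders_pk on orders (order_id)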

Back to top


1.5.4: Optimistic versus Pessimistic locking?


This is the same problem another poster had ... basically locking a record to
ensure that it hasn't changed underneath ya. fcasas has a pretty nifty solution
if you are using ct-lib (I'll include that below -- hope it's okay Francisco
... :-)) ...

Basically the problem you are facing is one of being a pessimist or an
optimist.
I contend that your business really needs to drive this.

Most businesses (from my experience) can be optimistic.

That is, if you are optimistic that the chances that someone is going to change
something from underneath the end-user is low, then do nothing about it.

On the other hand, if you are pessimistic that someone may change something
underneath the end-user, you can solve it at least as follows:

Solution #1

Use a timestamp on a header table that would be shared by the common data. This
timestamp field is a Sybase datatype and has nothing to do with the current
time. Do not attempt to do any operations on this column other than
comparisons. What you do is when you grab data to present to the end-user, have
the client software also grab the timestamp column value. After some
time, if the end-user wishes to update the database, compare the client
timestamp with what's in the database and if it's changed, then you can take
appropriate action: again this is dictated by the business.
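
As a minimal sketch of Solution #1, assume a hypothetical header table hdr
with a Sybase timestamp column named ts (the table, column and variable names
are illustrative). The tsequal() built-in compares a saved timestamp with the
row's current one and raises an error if the row has changed:

/* grab the data and the timestamp for the client */
select @old_ts = ts, @val = some_col
from hdr
where hdr_id = @id

/* later, at update time: tsequal() fails if another
** connection has changed the row in the meantime */
update hdr
set some_col = @new_val
where hdr_id = @id
and tsequal(ts, @old_ts)

If the update fails, the client re-reads the row and lets the business rules
decide what to do next.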

Problem #1

If users are sharing tables but columns are not shared, there's no way to
detect this using timestamps because it's not sufficiently granular.

Solution #2 (presented by fcasas)

... Also are you coding to ct-lib directly? If so there's something that you
could have done, or may still be able to do if you are using cursors.

With ct-lib there's a ct_describe function that lets you see key data. This
allows you to implement optimistic locking with cursors and not need
timestamps. Timestamps are nice, but they are changed when any column on a row
changes, while the ct_describe mechanism detects changes at the column level
for a greater degree of granularity of the change. In other words, the
timestamp granularity is at the row level, while ct_describe's CS_VERSION_KEY
provides you with granularity at the column level.

Unfortunately this is not well documented and you will have to look at the
training guide and the manuals very closely.

Further, if you are using cursors, do not make use of the

[for {read only | update [of column_name_list]}]

clause of the select statement. Omitting this clause will still get you data
that can be updated, while still only placing a shared lock on the page. If
you use the read only clause you are acquiring shared locks, but the cursor is
not updatable. However, if you say

update [of ...

it will place update locks on the page, thus causing contention. So, if you
are using cursors, don't use the above clause. So, could you answer the
following three questions:

1. Are you using optimistic locking?
2. Are you coding to ct-lib?
3. Are you using cursors?
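
As a sketch of the cursor advice above (cursor and table names are made up):

/* updatable, and only takes shared locks: omit the for clause */
declare c1 cursor for
select last_name, phone_number from phone_book

/* read only: still shared locks, but the cursor cannot be updated */
declare c2 cursor for
select last_name, phone_number from phone_book
for read only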

Problem #2

You need to be coding with ct-lib ...

Solution #3

Do nothing and be optimistic. We do a lot of that in our shop and it's really
not that big of a problem.

Problem #3

Users may clobber each other's changes ... then they'll come looking for you to
clobber you! :-)

Back to top


1.5.5: How do I force an index to be used?


System 11

In System 11, the binding to the internal ordinal value is removed, so that
instead of using the ordinal index value, the index name can be used:

select ... from my_table (index my_first_index)

Sybase 4.x and Sybase System 10

All indexes have an ordinal value assigned to them. For example, the following
query will return the ordinal value of all the indexes on my_table:

select name, indid
from sysindexes
where id = object_id("my_table")

Assuming that we wanted to force the usage of the index numbered three:

select ... from my_table(3)

Note: using a value of zero is equivalent to forcing a table scan. Whilst this
sounds like a daft thing to do, sometimes a table scan is a better solution
than heavy index scanning.

It is essential that all index hints be well documented. This is good DBA
practice. It is especially true for Sybase System 10 and below.

One scheme that I have used that works quite well is to implement a table
similar to sysdepends in the database that contains the index hints.

create table idxdepends
( tblname varchar(32) not null -- Table being hinted
,depname varchar(50) not null -- Proc, trigger or app that
                              -- contains the hint.
,idxname varchar(32) not null -- Index being hinted at
--,hintcount int null         -- You may want to count the
                              -- number of hints per proc.
)

Obviously it is a manual process to keep the table populated, but it can save a
lot of trouble later on.
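
For example, recording a hypothetical hint (all names are made up):

insert idxdepends (tblname, depname, idxname)
values ('my_table', 'get_orders_proc', 'ord_idx1')

-- before dropping or rebuilding an index, see who hints at it
select depname
from idxdepends
where idxname = 'ord_idx1'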

Back to top


1.5.6: Why place tempdb and log on low numbered devices?


System 10 and below.

In System 10 and Sybase 4.X, the I/O scheduler starts at logical device (ldev)
zero and works up the ldev list looking for outstanding I/O's to process.
Taking this into consideration, the following device fragments (disk init)
should be added before any others:

1. tempdb
2. log

Back to top


David Owen

Apr 20, 2004, 9:45:05 AM4/20/04
Archive-name: databases/sybase-faq/part6


Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database bigger?
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

General Troubleshooting User Database Administration ASE FAQ


1.3.1 How to clear a log suspend


A connection that is in a log suspend state is there because the transaction
that it was performing couldn't be logged. The reason it couldn't be logged is
because the database transaction log is full. Typically, the connection that
caused the log to fill is the one suspended. We'll get to that later.

In order to clear the problem you must dump the transaction log. This can be
done as follows:

dump tran db_name to data_device

At this point, any completed transactions will be flushed out to disk. If you
don't care about the recoverability of the database, you can issue the
following command:

dump tran db_name with truncate_only

If that doesn't work, you can use the with no_log option instead of the with
truncate_only option.
After successfully clearing the log the suspended connection(s) will resume.

Unfortunately, as mentioned above, there is the situation where the connection
that is suspended is the culprit that filled the log. Remember that dumping the
log only clears out completed transaction. If the connection filled the log
with one large transaction, then dumping the log isn't going to clear the

System 10

What you need to do is issue an ASE kill command on the connection and then
un-suspend it:

select lct_admin("unsuspend", db_id("db_name"))

System 11

See Sybase Technical News Volume 6, Number 2

Retaining Pre-System 10 Behaviour

By setting a database's abort tran on log full option, pre-System 10 behaviour
can be retained. That is, if a connection cannot log its transaction to the log
file, it is aborted by ASE rather than suspended.
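
For example, the option can be set with sp_dboption from the master database
(the database name here is illustrative):

use master
go
sp_dboption my_db, "abort tran on log full", true
go
use my_db
go
checkpoint
go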

Return to top


1.3.2 What's the best value for cschedspins?


It is crucial to understand that cschedspins is a tunable parameter
(recommended values being between 1-2000) and the optimum value is completely
dependent on the customer's environment. cschedspins is used by the scheduler
only when it finds that there are no runnable tasks. If there are no runnable
tasks, the scheduler has two options:

1. Let the engine go to sleep (which is done by an OS call) for a specified
interval or until an event happens. This option assumes that tasks won't
become runnable because of tasks executing on other engines. This would
happen when the tasks are waiting for I/O more than any other resource such
as locks. Which means that we could free up the CPU resource (by going to
sleep) and let the system use it to expedite completion of system tasks
including I/O.
2. Go and look for a ready task again. This option assumes that a task would
become runnable in the near term and so incurring the extra cost of an OS
context switch through the OS sleep/wakeup mechanism is unacceptable. This
scenario assumes that tasks are waiting on resources such as locks, which
could free up because of tasks executing on other engines, more than they
wait for I/O.

cschedspins controls how many times we would choose option 2 before choosing
option 1. Setting cschedspins low favours option 1 and setting it high favours
option 2. Since an I/O intensive task mix fits in with option 1, setting
cschedspins low may be more beneficial. Similarly since a CPU intensive job mix
favours option 2, setting cschedspins high may be beneficial.

The consensus is that a single CPU server should have cschedspins set to 1.
However, I strongly recommend that users carefully test values for cschedspins
and monitor the results closely. I have seen more than one site that has shot
themselves in the foot so to speak due to changing this parameter in production
without a good understanding of their environment.

Return to top


1.3.3 Trace Flag Definitions


To activate trace flags, add them to the RUN_* script. The following example is
using the 1611 and 260 trace flags. Note that there is no space between the
'-T' and the traceflag, despite what is written in some documentation.

Use of these traceflags is not recommended by Sybase. Please use at your
own risk.

% cd ~sybase/install
# SQL Server Information:
# name: BLAND
# master device: /usr/sybase/dbf/BLAND/master.dat
# master device size: 25600
# errorlog: /usr/sybase/install/errorlog_BLAND
# interfaces: /usr/sybase
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260

Trace Flags
| Flag | Description |
| 108 | (Documented) To allow dynamic and host variables in create view |
| | statements in ASE 12.5 and above. |
| 200 | Displays messages about the before image of the query-tree. |
| 201 | Displays messages about the after image of the query-tree. |
| 241 | Compress all query-trees whenever the SQL dataserver is started. |
| 260 | Reduce TDS (Tabular Data Stream) overhead in stored procedures. Turn |
| | off done-in-proc packets. Do not use this if your application |
| | is a ct-lib based application; it'll break. |
| | Why set this on? Glad you asked, typically with a db-lib application |
| | a packet is sent back to the client for each batch executed within a |
| | stored procedure. This can be taxing in a WAN/LAN environment. |
| 291 | Changes the hierarchy and casting of datatypes to pre-11.5.1 |
| | behaviour. There was an issue in some very rare cases where a wrong |
| | result could occur, but that's been cleared up in 11.9.2 and above. |
| | The trace can be used at boot time or at the session level. Keep in |
| | mind that it does not disqualify a table scan from occurring. What |
| | it will do is result in fewer datatype mismatch situations and thus |
| | the optimizer will be able to estimate the costs of SARGs and joins |
| | on columns involved in a mismatch. |
| 299 | This trace flag instructs the dataserver to not recompile a child |
| | stored procedure that inherits a temp table from a parent procedure. |
| 302 | Print information about the optimizer's index selection. |
| 303 | Display OR strategy |
| | Revert special or optimizer strategy to that strategy used in |
| 304 | pre-System 11 (this traceflag resolved several bug issues in System |
| | 11, most of these bugs are fixed in ASE |
| 310 | Print information about the optimizer's join selection. |
| 311 | Display the expected IO to satisfy a query. Like statistics IO |
| | without actually executing. |
| 317 | Provide extra optimization information. |
| 319 | Reformatting strategies. |
| 320 | Turn off the join order heuristic. |
| 324 | Turn off the like optimization for ad-hoc queries using |
| | @local_variables. |
| | (Only valid in ASE versions prior to 11.9.2.) Instructs the server |
| | to use arithmetic averaging when calculating density instead of a |
| 326 | geometric weighted average when updating statistics. Useful for |
| | building better stats when an index has skew on the leading column. |
| | Use only for updating the stats of a table/index with known skewed |
| | data. |
| | |
| 602 | Prints out diagnostic information for deadlock prevention. |
| 603 | Prints out diagnostic information when avoiding deadlock. |
| 699 | Turn off transaction logging for the entire SQL dataserver. |
| 1204*| Send deadlock detection to the errorlog. |
| 1205 | Stack trace on deadlock. |
| 1206 | Disable lock promotion. |
| 1603*| Use standard disk I/O (i.e. turn off asynchronous I/O). |
| 1605 | Start secondary engines by hand |
| | Create a debug engine start file. This allows you to start up a |
| | debug engine which can access the server's shared memory for running |
| | diagnostics. I'm not sure how useful this is in a production |
| 1606 | environment as the debugger often brings down the server. I'm not |
| | sure if Sybase have ported the debug stuff to 10/11. Like most of |
| | their debug tools it started off quite strongly but was never |
| | developed. |
| | Startup only engine 0; use dbcc engine("online") to incrementally |
| 1608 | bring up additional engines until the maximum number of configured |
| | engines. |
| 1610*| Boot the SQL dataserver with TCP_NODELAY enabled. |
| 1611*| If possible, pin shared memory -- check errorlog for success/ |
| | failure. |
| 1613 | Set affinity of the SQL dataserver engine's onto particular CPUs -- |
| | usually pins engine 0 to processor 0, engine 1 to processor 1... |
| 1615 | SGI only: turn on recoverability to filesystem devices. |
| | Linux only: Revert to using cached filesystem I/O. By default, ASE |
| 1625 | on Linux (11.9.2 and above) opens filesystem devices using O_SYNC, |
| | unlike other Unix based releases, which means it is safe to use |
| | filesystems devices for production systems. |
| 2512 | Prevent dbcc from checking syslogs. Useful when you are constantly |
| | getting spurious allocation errors. |
| 3300 | Display each log record that is being processed during recovery. You |
| | may wish to redirect stdout because it can be a lot of information. |
| 3500 | Disable checkpointing. |
| 3502 | Track checkpointing of databases in errorlog. |
| 3601 | Stack trace when error raised. |
| 3604 | Send dbcc output to screen. |
| 3605 | Send dbcc output to errorlog. |
| 3607 | Do not recover any database, clear tempdb, or start up checkpoint |
| | process. |
| 3608 | Recover master only. Do not clear tempdb or start up checkpoint |
| | process. |
| 3609 | Recover all databases. Do not clear tempdb or start up checkpoint |
| | process. |
| 3610 | Pre-System 10 behaviour: divide by zero to result in NULL instead of |
| | error - also see Q6.2.5. |
| 3620 | Do not kill infected processes. |
| 4001 | Very verbose logging of each login attempt to the errorlog. Includes |
| | tons of information. |
| 4012 | Don't spawn chkptproc. |
| 4013 | Place a record in the errorlog for each login to the dataserver. |
| 4020 | Boot without recover. |
| | Forces all I/O requests to go through engine 0. This removes the |
| 5101 | contention between processors but could create a bottleneck if |
| | engine 0 becomes busy with non-I/O tasks. For more information... |
| | 5101/5102. |
| 5102 | Prevents engine 0 from running any non-affinitied tasks. For more |
| | information...5101/5102. |
| 7103 | Disable table lock promotion for text columns. |
| 8203 | Display statement and transaction locks on a deadlock error. |
| * | Starting with System 11 these are sp_configure'able |

Return to top


1.3.4 Trace Flags -- 5101 and 5102



Normally, each engine issues and checks for its own Disk I/O on behalf of the
tasks it runs. In completely symmetric operating systems, this behavior
provides maximum I/O throughput for ASE. Some operating systems are not
completely symmetric in their Disk I/O routines. For these environments, the
server can be booted with the 5101 trace flag. While tasks still request disk I
/O from any engine, the actual request to/from the OS is performed by engine 0.
The performance benefit comes from the reduced or eliminated contention on the
locking mechanism inside the OS kernel. To enable I/O affinity to engine 0,
start ASE with the 5101 Trace Flag.

Your errorlog will indicate the use of this option with the message:

Disk I/O affinitied to engine: 0

This trace flag only provides performance gains for servers with 3 or more
dataserver engines configured and being significantly utilized.

Use of this trace flag with fully symmetric operating systems will degrade
performance.


The 5102 trace flag prevents engine 0 from running any non-affinitied tasks.
Normally, this forces engine 0 to perform Network I/O only. Applications with
heavy result set requirements (either large results or many connections issuing
short, fast requests) may benefit. This effectively eliminates the normal
latency for engine 0 to complete running its user thread before it issues the
network I/O to the underlying network transport driver. If used in conjunction
with the 5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
For environments with heavy disk and network I/O, engine 0 could easily
saturate when only the 5101 flag is in use. This flag allows engine 0 to
concentrate on I/O by not allowing it to run user tasks. To force task affinity
off engine 0, start ASE with the 5102 Trace Flag.

Your errorlog will indicate the use of this option with the message:

I/O only enabled for engine: 0

Warning: Not supported by Sybase. Provided here for your enjoyment.

Return to top


1.3.5 What is cmaxpktsz good for?


cmaxpktsz corresponds to the parameter "maximum network packet size" which you
can see through sp_configure. I recommend only updating this value through
sp_configure. If some of your applications send or receive large amounts of
data across the network, these applications can achieve significant performance
improvement by using larger packet sizes. Two examples are large bulk copy
operations and applications reading or writing large text or image values.
Generally, you want to keep the value of default network packet size small for
users performing short queries, and allow users who send or receive large
volumes of data to request larger packet sizes by setting the maximum network
packet size configuration variable.
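
For example, to allow clients to request packets of up to 4096 bytes (the
value is illustrative):

sp_configure "maximum network packet size", 4096
go

A client such as isql can then ask for the bigger packet size at login with
its -A option, e.g. isql -A4096.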

caddnetmem corresponds to the parameter "additional netmem" which you can see
through sp_configure. Again, I recommend only updating this value through
sp_configure. "additional netmem" sets the maximum size of additional memory
that can be used for network packets that are larger than ASE's default packet
size. The default value for additional netmem is 0, which means that no extra
space has been allocated for large packets. See the discussion below, under
maximum network packet size, for information on setting this configuration
variable. Memory allocated with additional netmem is added to the memory
allocated by the memory parameter. It does not affect other ASE memory uses.

ASE guarantees that every user connection will be able to log in at the default
packet size. If you increase maximum network packet size and additional netmem
remains set to 0, clients cannot use packet sizes that are larger than the
default size: all allocated network memory will be reserved for users at the
default size. In this situation, users who request a large packet size when
they log in receive a warning message telling them that their application will
use the default size. To determine the value for additional netmem if your
applications use larger packet sizes:

* Estimate the number of simultaneous users who will request the large packet
sizes, and the sizes their applications will request.
* Multiply this sum by three, since each connection needs three buffers.
* Add 2% for overhead, rounded up to the next multiple of 512.
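
A worked example of the arithmetic above, assuming 10 simultaneous users each
requesting 4096 byte packets:

10 users * 4096 bytes * 3 buffers = 122880 bytes
122880 * 1.02 = 125337.6 bytes
rounded up to the next multiple of 512 = 125440 bytes

sp_configure "additional netmem", 125440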

Return to top


1.3.6 Buildmaster Configuration Definitions


Attention! Please notice, be very careful with these parameters. Use only
at your own risk. Be sure to have a copy of the original parameters. Be
sure to have a dump of all dbs (include master) handy.

Since the release of 11.x (and above), there is almost no need to use
buildmaster to configure parameters. In fact, buildmaster has been removed
from ASE 12.5. This section is really kept for anyone out there running old
versions of ASE. I still see the odd post from people asking about 4.9.2, so
this is for you.

Anyone else who feels a need to use buildmaster should check sp_configure
and/or SERVERNAME.cfg to see if the configuration parameter is there before
using buildmaster.



The following is a list of configuration parameters and their effect on the
ASE. Changes to these parameters can affect performance of the server. Sybase
does not recommend modifying these parameters without first discussing the
change with Sybase Tech Support. This list is provided for information only.

These are categorized into two kinds:

* Configurable through sp_configure, and
* Not configurable, but can be changed through 'buildmaster -y<variable>=value'

Configurable variables:


The recovery interval specified in minutes.


A flag to inform whether system catalogs can be updated or not.


This is the number of user connections allowed in SQL
Server. This value + 3 (one each for the checkpoint, network
and mirror handlers) makes up the number of pss configured
in the server.


Number of PSS configured in the server. This value will
always be 3 more than cusrconnections. The reason is we
need PSS for checkpoint, network and mirror handlers.



The total memory configured for the Server in 2k
units. This is the memory the server will use for both
Server and Kernel Structures. For Stratus or any 4k
pagesize implementation of ASE, certain values
will change as appropriate.


This is the number of databases that can be open in SQL
Server at any given time.


Variable that defines and controls the number of logical
locks configured in the system.


This is the number of open objects that can be open at
a given point of time.


This is the percentage of cache that should be used
for procedures to be cached in.


Fill factor for indexes.


This value is in units of milli-seconds. This value determines
how much time a task is allowed to run before it yields.
This value is internally converted to ticks. See below
the explanations for cclkrate, ctimemax etc.


The default size of the database when it is created.
This value is Megabytes and the default is 2Meg.


An outdated, unused variable.


A toggle flag which will display certain recovery information
during database recoveries.


An informational variable that stores the serial number
of the product.


Flag that controls whether nested triggers allowed or not.


Variable that controls the number of device structures
that are allocated, which affects the number of devices
that can be opened during server boot up. If the user
defined 20 devices and this value is configured to be
10, during recovery only 10 devices will be opened and
the rest will get errors.


This variable controls the maximum number of site handler
structures that will be allocated. This in turn
controls the number of site handlers that can be
active at a given instance.


This variable controls the number of remote buffers
needed to send and receive from remote sites.
Actually this value should be set to the number of
logical connections configured. (See below)


This is the number of logical connections that can
be open at any instance. This value controls
the number of resource structures allocated and
hence it will affect the overall logical connections
combined with different sites. THIS IS NOT PER SITE.


Maximum number of pre-read packets per logical connection.
If logical connections is set to 10, and cfgdatabuf is set
to 3, then the number of resources allocated will be
30 (10 * 3).

Version number of last upgrade program ran on this server.


Sort order of ASE.


When sort orders are changed the old sort order is
saved in this variable to be used during recovery
of the database after the Server is rebooted with
the sort order change.


Character Set used by ASE


Same as cold_sortord except it stores the previous
Character Set.


page # of sort order image definition. This should
not be changed at any point. This is a server only
variable.


page # of character set image definition. This should
not be changed at any point. This is a server only
variable.


page # of previous sort order image definition. This
should not be changed at any point. This is a server
only variable.


page # of previous character set image definition. This
should not be changed at any point. This is a server
only variable.


Default language used by ASE.


Maximum number of engines that can be brought online. This
number should not be more than the # of cpus available on this
system. On a single CPU system like the RS6000 this value is
always 1.


Minimum number of engines that should be online. This is 1 by
default.

A noop variable at this time.


Stack size per task configured. This doesn't include the guard
area of the stack space. The guard area can be altered through
cguardsz (see below).


This is the size of the guard area. ASE will
allocate stack space for each task by adding cfgstacksz
(configurable through sp_configure) and cguardsz (default is
2K). This has to be a multiple of PAGESIZE which will be 2k
or 4k depending on the implementation.


Size of fixed stack space allocated per task including the
guard area.

Non-configurable values :




1 millisecond = 1/1000th of a second.
1 microsecond = 1/1000000th of a second.

"Tick" : Interval between two clock interrupts in real time.

"cclkrate" :

A value specified in microsecond units.
Normally on systems where a fine grained timer is not available
or if the Operating System cannot set sub-second alarms, this
value is set to 1000000 microseconds, which is 1 second. In
other words an alarm will go off every second, i.e. you will
get 1 tick per second.

On Sun4 this is set to 100000 microseconds, which will result in
an interrupt going off every 1/10th of a second. You will get 10
ticks per second.

"avetimeslice" :

A value specified in millisecond units.
This is the value given in "sp_configure", <timeslice value>.
The milliseconds are converted to microseconds and
finally to tick values.

ticks = <avetimeslice> * 1000 / cclkrate.

"timeslice" :

The unit of this variable is in ticks.
This value is derived from "avetimeslice". If "avetimeslice"
is less than 1000 milliseconds then timeslice is set to 1 tick.

"ctimemax" :

The unit of this variable is in ticks.

A task is considered to be in an infinite loop if the ticks
consumed by that task exceed the ctimemax value. This
is when you get timeslice -201 or -1501 errors.

"cschedspins" :

For more information see Q1.3.2.

This value alters the behavior of ASE scheduler.
The scheduler will either run a qualified task or look
for I/O completion or sleep for a while before it can
do anything useful.

The cschedspins value determines how often the scheduler
will sleep, not how long it will sleep. A low value
is suited to an I/O bound ASE, while a
high value is suited to a CPU bound ASE. Since
ASE is normally used in a mixed mode, this value
needs to be fine tuned.

Based on practical behavior in the field, a single engine
ASE should have cschedspins set to 1 and a multi-engine
server should have it set to 2000.

Now that we've defined the units of these variables what happens when we change
cclkrate ?

Assume we have a cclkrate=100000.

A clock interrupt will occur every (100000/1000000) = 1/10th of a second.
A task that starts with 1 tick and can go up to "ctimemax=1500" ticks
can potentially take 0.1s * (1500 + 1) ticks, which is about 150 seconds
per task.

Now change the cclkrate to 75000.

A clock interrupt will occur every (75000/1000000) = 0.075 seconds. A task
that starts with 1 tick and can go up to ctimemax=1500 ticks can
potentially take 0.075s * (1500 + 1) ticks, which is about 112.5 seconds
per task.

Decreasing the cclkrate value decreases the time allowed to each task. If a
task cannot voluntarily yield within that time, the scheduler will kill the
task.

UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The default
ctimemax value should be set to 1500. This is an empirical value and it can
be changed under special circumstances, and strictly under the guidance of DSE.



Name of the master device is saved here. This is 64
bytes in length.


This is a derived value from cusrconnections + 3.
See cusrconnections above.


This value defines the number of transactions that
can be performed by a task at a given instance.
Changing this value to more than 32 will have no
effect on the server.
This value defines the number of open tables per
task, typically for a query: it is the number of
tables specified in a query, including subqueries.

Sybase advises not to change this value. There
will be a significant change in the size of the
per user resource in ASE.


This is a derived variable, based on the total
memory configured minus the different resource
sizes for databases, objects, locks and other
kernel memory.


This is the same as cdesnum. Other values will have no effect on it.


This is a derived value, based on the cpcacheprcnt variable.


This is the same as clocknum. Other values will have no effect on it.


This variable defines the number of cache protectors per
task. It is used internally by ASE.

Sybase advises not to modify this value, as the default of 15 will
be more than sufficient.


This is a derived value based on cusrconnections + <extra> for
Sybase internal tasks that are both visible and non-visible.


This is an internal variable that will keep track of ASE

Modifying this value will not have any effect.


Number of mailbox structures that need to be allocated.
Used more in VMS environments than UNIX environments.


Used in tandem with cnmbox.


Maximum number of messages that can be passed between mailboxes.


Number of disk I/O requests (async and direct) that can be
processed at a given instance. This is a global value for all
the engines, not a per engine value.

This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.


Maximum number of I/O requests that can be processed at any given
instance.

Normally cnblkio, cnblkmax and cnmaxaio_server should be the same.


Maximum number of I/O requests that can be processed by one engine.
Since engines are Operating System processes, if there is any limit
imposed by the Operating System on a per process basis then
this value should be set. Otherwise it is a noop.


This is the total number of I/O requests ASE can do.
This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.


not used.


Similar to the disk I/O requests, this is for network I/O requests.
This includes disk/tape dumps. This value is for
the whole ASE, including all engines.


Maximum number of network I/O requests, including disk/tape dumps.


Maximum number of alarms, including the alarms used by
the system. This is typically used when users do "waitfor delay".

Mirror device name for the master device.


Status of mirror devices for the master device like serial/dynamic
mirroring etc.


This value determines the aging of an index buffer before it
is removed from the cache.


This value determines the aging of an OAM buffer before it
is removed from the cache.


This value determines the number of extents that will be
allocated while doing BCP.


This value determines when to flush buffers in the cache
that are modified.

Return to top


1.3.7: What is CIS and how can I use it?


CIS is the new name for Omni ASE. The biggest difference is that CIS is
included with Adaptive Server Enterprise as standard. Actually, this is not
completely accurate; the ability to connect to other ASEs and SQL Servers,
including Microsoft's, is included as standard. If you need to connect to DB2
or Oracle you have to obtain an additional licence.

So, what is it?

CIS is a means of connecting two servers together so that seamless cross-server
joins can be executed. It is not just restricted to selects, pretty much any
operation that can be performed on a local table can also be performed on a
remote table. This includes dropping it, so be careful!

What servers can I connect to?

* Sybase ASE
* Microsoft SQL Server
* Oracle

What are the catches?

Well, nothing truly comes for free. CIS is not a means of providing true load
sharing, although you will find nothing explicitly in the documentation to tell
you this. Obviously there is a performance hit which seems to affect cursors
worst of all. CIS itself is implemented using cursors and this may be part of
the explanation.

OK, so how do I use it?

Easy! Add the remote server using sp_addserver. Make sure that you define it
as type sql_server or ASEnterprise. Create an "existing" table using the
definition of the remote table. Update statistics on this new "existing"
table. Then simply use it in joins exactly as if it were a local table.
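
As a sketch of those steps (the server, database, table and column names here
are invented for illustration; check the Component Integration Services
documentation for the exact syntax on your version):

```sql
-- register the remote server (class ASEnterprise for another ASE)
exec sp_addserver REMOTE_SRV, ASEnterprise, REMOTE_SRV
go

-- describe the remote table locally as an "existing" table
create existing table remote_parts
(
    part_no int         not null,
    descr   varchar(40) not null
)
at "REMOTE_SRV.partsdb.dbo.parts"
go

update statistics remote_parts
go

-- now join it exactly as if it were a local table
select s.part_no, s.qty, r.descr
from   stock s, remote_parts r
where  s.part_no = r.part_no
go
```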

Return to top


1.3.8: If the master device is full, how do I make the master database bigger?


It is not possible to extend the master database across another device, so the
following from Eric McGrane (recently of Sybase Product Support Engineering)
should help.

* dump the current master database
* Pre-12.5 users use buildmaster to create a new master device with a larger
size. ASE 12.5 users use dataserver to build the new, larger, master
device.
* start the server in single user mode using the new master device
* login to the server and execute the following tsql:

select * from sysdevices

* take note of the high value
* load the dump of the master database that you just took
* restart the server (as it will be shut down when master is done loading),
in single user mode so that you can update system tables
* login to the server and update sysdevices, setting high for master to the
value that you noted previously
* shut the server down and start it back up, but this time not in single user
mode
The end result of the above is that you will now have a larger master device
and you can alter your master database to be a larger size. For details about
starting the server in single user mode and how to use buildmaster (if you need
the details) please refer to the documentation.
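
The sysdevices update in the penultimate step can be sketched as follows; the
high value shown is purely illustrative (use the figure you noted before
loading the dump), and system table updates must be explicitly enabled first:

```sql
sp_configure "allow updates", 1
go

update master..sysdevices
set    high = 102399        -- the value noted from the new, larger device
where  name = "master"
go

sp_configure "allow updates", 0
go
```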

Return to top


1.3.9: How do I run multiple versions of Sybase on the same server?


The answer to this relies somewhat on the platform that you are using.


ASE Versions Before 12.0

This applies to Unix and variants, Linux included. Install the various releases
of software into logical places within your filesystem. I like to store all
application software below a single directory for ease of maintenance, choose
something like /sw. I know that some are keen on /opt and others /usr/local. It
is all down to preference and server usage. If you have both Oracle and Sybase
on the same server you might want /sw/sybase or /opt/sybase. Be a little
careful here if your platform is Linux or FreeBSD. The standard installation
directory for Sybase on those platforms is /opt/sybase. Finally, have a
directory for the release, say ASE11_9_2 or simply 11.9.2 if you only ever have
Sybase ASE running on this server. A little imagination is called for!

So, now you have a directory such as /sw/sybase/ASE/11.9.2 (my preferred choice
:-), and some software installed under the directories, what now? In the most
minimal form, that is all you need. None of the environment variables are
essential. You could quite successfully run

/sw/sybase/ASE/11.9.2/bin/isql -Usa -SMYSERV -I/sw/sybase/ASE/11.9.2/interfaces

and get to the server, but that is a lot of typing. By setting the SYBASE
environment variable to /sw/sybase/ASE/11.9.2 you never need to tell isql or
other apps where to find the interfaces. Then, you can set the path with a
cool

PATH=$SYBASE/bin:$PATH

to pick up the correct set of Sybase binaries. That reduces the previous mass
of typing to

isql -Usa -SMYSERV

which is much more manageable.

You can create yourself a couple of shell scripts to do the changes for you. So
if the script a11.9 contained:

SYBASE=/sw/sybase/ASE/11.9.2
PATH=$SYBASE/bin:$PATH
# Remember to export the variables!

and a12.0 contained:

SYBASE=/sw/sybase/ASE/12.0
PATH=$SYBASE/bin:$PATH
# Remember to export the variables!

you would toggle between being connected to an 11.9.2 server and a 12.0 server,
depending upon which one you executed last. The scripts are not at all
sophisticated, you could quite easily have one script and pass a version string
into it. You will notice that the PATH variable gets longer each time the
script is executed. You could add greps to see if there was already a Sybase
instance on the path. Have I mentioned imagination?
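
The one-script-with-a-version-string idea can be sketched like this. The
/sw/sybase/ASE/<version> layout and the SYBROOT override are assumptions
based on the tree described above; the grep takes care of the ever-growing
PATH problem by stripping old Sybase entries before prepending the new one.

```shell
#!/bin/sh
# sybenv: point SYBASE and PATH at one ASE installation under a common root.
# Source this file (". ./sybenv.sh") and then run "sybenv 11.9.2" so that
# the exported variables persist in your login shell.

sybenv() {
    root=${SYBROOT:-/sw/sybase/ASE}   # override SYBROOT if your tree differs
    ver=$1
    if [ -z "$ver" ] || [ ! -d "$root/$ver" ]; then
        echo "usage: sybenv <version>  (no installation at $root/$ver)" >&2
        return 1
    fi
    SYBASE=$root/$ver
    # Strip any earlier Sybase entries from PATH, then prepend the new bin,
    # so repeated calls do not grow the PATH.
    PATH=$SYBASE/bin:$(printf '%s' "$PATH" | tr ':' '\n' | grep -v "^$root/" | paste -sd: -)
    export SYBASE PATH
}
```

Calling "sybenv 12.0" after "sybenv 11.9.2" swaps the 11.9.2 entries out of
the PATH rather than stacking a second set on top.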

ASE 12.0 and Beyond

Sybase dramatically changed the structure of the installation directory tree
with ASE 12. You still have a SYBASE environment variable pointing to the
root, but now the various packages fit below that directory. So, if we take /
sw/sybase as the root directory, we have the following (the following is for a
12.5 installation, but all versions follow the same format):


Below ASE-12_5 is most of the stuff that we have come to expect under $SYBASE,
the install, bin and scripts directories. This is also where the SERVER.cfg
file has moved to. (Note that the interfaces file is still in $SYBASE.) The bin
directory on this side includes the dataserver, diagserver and srvbuild
binaries.
The OCS-12_5 is the open client software directory. It means that Sybase can
update the client software without unduly affecting the server. isql, bcp and
other clients are to be found here.

It does take a little getting used to if you have been using the pre-12 style
for a number of years. However, in its defence, it is much more logical, even
if it about triples the length of your PATH variable!

That is another good part of the new installation. Sybase actually provides you
with the shell script to do all of this. There is a file in /sw/sybase called (there is an equivalent C shell version in the same place) that sets
everything you need!

Interfaces File

The only real addition to all of the above is an easier way to manage the
interfaces file. As mentioned before, ASE based apps look for the interfaces
file in $SYBASE/interfaces by default. Unix is nice in that it allows you to
have symbolic links that make it appear as if a file is somewhere that it
isn't. Place the real interfaces file somewhere independent of the software
trees. /sw/sybase/ASE/interfaces might be a sound logical choice. Now, cd to
$SYBASE and issue

ln -s /sw/sybase/ASE/interfaces

and the interfaces will appear to exist in the $SYBASE directory, but will in
fact remain in its own home.

Note: make sure that the interfaces file is copied to its own home before
removing it from $SYBASE.

Now you can put symbolic links in each and every software installation and only
have to worry about maintaining the server list, on that server, in one place.
Having the interfaces file common to many physical servers is trickier, but not
impossible. Personally I would choose to put it in a central CVS repository and
use that to keep each server reasonably up-to-date.


Firstly, I have tried the following on W2K and it all works OK. I have read a
number of reports of people having difficulty getting clean installs under NT,
11.5 and 12.0 mainly. I cannot remember having a problem with either of those
myself, but I only ever installed it to test that stuff I write runs on all
platforms. I have no intention of upgrading to XP until MS pays me to do it. It
looks like a cheap plastic version of an operating system and I pity anyone
that is forced to use it.

NT is tougher than UNIX to run multiple instances on, mainly due to the fact
that it wants to do stuff for you in the background, namely configure
environment variables. The following worked for me with the following versions
of Sybase ASE all installed and running on a single server: 11.5.1, 11.9.2,
12.5. I don't have a version of ASE 12.0 for NT. If I can persuade Sybase to
send it to me, I might be able to get that running too. Notably, each and
every one of the servers runs as a service!!!

1. Start by installing each software release into its own area. Make sure that
it is a local disk. (See Q2.2.3.) I chose to install ASE 12.5 into C:\
Sybase12_5 and ASE 11.9.2 into C:\Sybase11_9_2 etc. When it asks you about
configuring the server, select "no" or "cancel".
2. Add a user for each installation that you are going to run. Again, I added
a user sybase12_5 for ASE 12.5 and sybase11_9_2 for ASE 11.9.2.
3. As a system account, edit the environment variables (On W2K this is
Settings->Control Panel->System->Advanced->Environment Variables...) and
remove any reference to Sybase from the system path. Make sure that you
store away what has been set. A text file on your C drive is a good idea at
this stage.
4. Similarly, remove references to Sybase from the Lib, Include and CLASSPATH
variables, storing the strings away.
5. Remove the SYBASE, DSEDIT and DSQUERY variables.
6. As I said before, I do not own 12.0, so I cannot tell you what to do about
its variables etc. I can only assume that you need to cut them out too. If
you are installing pre-12 with only one of 12 or 12.5, then it is not
necessary.
7. Login as each new Sybase user in turn and add to each of these a set of
local variables corresponding to path, Include, Lib and set them to be the
appropriate parts from the strings you removed from the system versions
above. So, if you installed ASE 12.5 in the method described, you will have
a whole series of variables with settings containing "C:\Sybase_12_5", add
all of these to local variables belonging to the user sybase12_5. Repeat
for each instance of ASE installed. This is a tedious process and I don't
know a way of speeding it up. It may be possible to edit the registry, but
I was not happy doing that.
8. If you have made each of the Sybase users administrators, then you can
configure the software from that account, and install a new ASE server.
Remember that each one needs its own port. 11.5.1 and 11.9.2 did not give
me an option to change the port during the install, so I had to do that
afterwards by editing the SQL.INI for each server in its own installation
directory.
9. If you are not able to make each user an administrator, you will need to
work with an admin to configure the software. (ASE requires administrative
rights in order to be able to add the service entries.) You will need to
log in as this admin account, set the path to the appropriate value for
each installation, install the software and then set the path to the new
values, install the next ASE etc. On NT for sure you will have to log out
and log in after changing the path variable. 2000 may be less brain dead.
Just be thankful you are not having to reboot!
10. Log back in as your tame administrator account and go into the control
panel. You need to start the "Services" applet. This is either there if you
are running NT or you have to go into "Administrative Tools" for 2000.
Scroll down and select the first of the services, which should be of the
form:

"Sybase SQLServer _MYSERVER".

Right click and select "Properties" (I think this is how it was for NT, but
you want that service's properties, however you get there). In 2000 there is
a "Log On" tab. NT has a button (I think) that serves the same purpose.
Whether tab or button, click on it. You should have a panel that starts, at
the top, with "Log on as" and a pair of radio options. The top one will
probably be selected, "Local System account". Choose the other and enter
the details for the sybase account associated with this server. So if the
server is ASE 12.5 enter "sybase12_5" for "This account" and enter the
password associated with this account in the next two boxes. Select enough
"OK"s to take you out of the service properties editor.
11. None of the installations made a good job of the services part. All of them
added services for all of the standard servers (data, backup, monitor and
XP), even though I had not configured any but XP server. (The NT
installation is of a different form to the UNIX/Linux versions.) The 12.5
XP configuration was OK, but the pre-12 ones were not. You will have to go
in and manually set the user to connect as (as described earlier). If you
do not do this, the services will not start properly.
12. You should then be able to start any or all of the services by pressing the
"play" button.
13. Finally, you need to re-edit the local copies of the path, Include and Lib
variables for your tame admin account if you use that account to connect to
any of the servers.
It worked for me, as I said. I was able to run all 3 services simultaneously
and connect from the local and external machines. There is no trick as neat as
the symbolic link on Unix. Links under NT work differently.

Return to top


1.3.10: How do I capture a process's SQL?


This is a bit of a wide question, and there are many answers to it. Primarily,
it depends on why you are trying to capture it. If you are trying to debug a
troublesome stored procedure that is behaving differently in production to how
it did in testing, then you might look at the DBCC method. Alternatively, if
you wanted to do some longer term profiling, then auditing or one of the third
party tools might be the way forward. If you know of methods that are not
included here, please let me know.


If you want to look at the SQL a particular process is running at the moment,
one of the following should work (I am not sure which versions of ASE these
work with). Remember to issue dbcc traceon(3604) before running any of the
dbcc's so that you can see the output at your terminal.

* dbcc sqltext(spid)
* dbcc pss(0, spid, 0)

The first of the commands prints just the SQL of the spid, a bit like this:

[27] BISCAY.master.1> dbcc sqltext(9)
[27] BISCAY.master.2> go
SQL Text: select spid, status, suser_name(suid), hostname,
db_name(dbid), cmd, cpu, physical_io, memusage,
convert(char(5),blocked) from master..sysprocesses
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
[28] BISCAY.master.1>

The second issues an awful lot of other stuff before printing the text at the
bottom. Mercifully, this means that you don't have to scroll up to search for
the SQL text, which is in much the same format as with dbcc sqltext.

There are a number of third party tools that will execute these commands from a
list of processes. One of the problems is that you do have to be 'sa' or have
'sa_role' in order to run them.

Certainly the first, and possibly both, have one major drawback, and that is
that they are limited to displaying about 400 bytes worth of text, which can be
a bit annoying. However, if what you are trying to do is catch a piece of rogue
SQL that is causing a table scan or some other dastardly trick, a unique
comment in the early part of the query will lead to its easy identification.

Monitor Server

Since ASE 11.5, monitor server has had the capability to capture a process's
SQL. See Q1.6.2 for how to configure a Monitor Server Client. When you are
done, you can see the SQL text from a process using the "Process Current
SQL Statement" monitor.


The second way of wanting to do this is for a number of processes over a period
of time. There are several methods of doing this. Probably the most popular is
to use auditing, almost certainly because it requires no additional software
purchases.

Auditing is a very powerful tool that can collect information on just about
everything that happens on the server. It can be configured to capture
'cmdtext' for any or all users on a system. The data is loaded into the
sysaudits tables (in the sybsecurity database) for later perusal. The SQL
captured is not limited to a number of bytes, like the previous examples, but
if it is more than 255 bytes long, then it will span several audit records,
which must be put back together to see the whole picture. To be honest, I am
not sure what happens now that varchars can be greater than 255 bytes in
length. Personal experience with auditing leads me to think that the load on
the server is up to about 3%, depending on the number of engines you have (the
more engines, the more of a load auditing is) and, obviously, the number of
processes you wish to monitor. I calculated 3% based on auditing all of 400
users, each of which had 2 connections to the server, on a server with 7
engines.
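
For example (the login name is hypothetical, and auditing options vary a
little between versions, so check the System Administration Guide for yours),
capturing the SQL text sent by one login looks something like:

```sql
-- sybsecurity must be installed, and auditing enabled globally
sp_configure "auditing", 1
go

-- record all SQL text sent by the login "fred"
sp_audit "cmdtext", "fred", "all", "on"
go
```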


Another option for capturing the SQL text is to use the free Ribo utility that
is provided as part of ASE these days. This is a small server written in
Java as an example of what can be done using jConnect. This utility is nice in
that it does not place any load on the ASE server. However, it probably has an
effect on the client that is using it. This utility's other drawback is that
each client that you wish to monitor via Ribo must be directly configured to
use it. It is not possible to just magically turn it on mid-session.

The way it works is to act as an intermediary between the ASE server and the
client wishing to connect. All SQL is passed through and executed exactly as
if the client were directly connected, and the results passed back. What the
Ribo server does is enable you to save the inbound SQL to a file.

3rd Party Tools

Again, there are a number of third party tools that do this job as well,
OpenSwitch being one of them. There are also a number of third party tools that
do a better job than this. They do not have any impact on the client or the
server. They work by sniffing the network for the relevant packets and then
putting them back together. In actuality, they do a lot more than just generate
the SQL, but they are capable of that.

Return to top


General Troubleshooting User Database Administration ASE FAQ

David Owen

Apr 20, 2004, 9:45:11 AM4/20/04
Archive-name: databases/sybase-faq/part12

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

SQL Advanced bcp ASE FAQ


6.1.1: Alternative to row at a time processing


Someone asked how they could speed up their processing. They were batch
updating/inserting gobs of information. Their algorithm was something as
follows:
... In another case I do:

if exists (select record) then
    update record
else
    insert record

I'm not sure which way is faster or if it makes a difference. I am doing
this for as many as 4000 records at a time (calling a stored procedure 4000
times!). I am interested in knowing any way to improve this. The parameter
translation alone on the procedure calls takes 40 seconds for 4000 records.
I am using exec in DB-Lib.

Would RPC or CT-Lib be better/faster?

A netter responded stating that it was faster to ditch their algorithm and to
apply a set based strategy:

The way to take your approach is to convert the row at a time processing
(which is more traditional type of thinking) into a batch at a time (which
is more relational type of thinking). Now I'm not trying to insult you to
say that you suck or anything like that, we just need to dial you in to
think in relational terms.

The idea is to do batches (or bundles) of rows rather than processing a
single one at a time.

So let's take your example (since you didn't give exact values [probably
out of kindness to save my eyeballs] I'll use your generic example to
extend what I'm talking about):


Old way:

if exists (select record) then
    update record
else
    insert record

New way:
1. Load all your rows into a table named new_stuff in a separate work
database (call it work_db) and load it using bcp -- no third GL needed.
1. truncate new_stuff and drop all indexes
2. sort your data using UNIX sort and sort it by the clustered columns
3. load it using bcp
4. create clustered index using with sorted_data and any ancillary
non-clustered index.
2. Assuming that your target table is called old_stuff
3. Do the update in a single batch:
begin tran

/* delete any rows in old_stuff which normally
** would have been updated... we'll insert 'em instead!
** Essentially, treat the update as a delete/insert.
*/
delete old_stuff
from old_stuff, new_stuff
where old_stuff.key = new_stuff.key

/* insert entire new table: this adds any rows
** that would have been updated before and
** inserts the new rows
*/
insert old_stuff
select * from new_stuff

commit tran

You can do all this without writing 3-GL, using bcp and a shell script.

A word of caution:

Since these inserts/updates are batch orientated you may blow your
log if you attempt to do too many at a time. In order to avoid this, use
the set rowcount directive to create bite-size chunks.
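
A sketch of that chunking, using the old_stuff/new_stuff example above (the
batch size of 1000 is illustrative only):

```sql
set rowcount 1000    /* delete at most 1000 rows per transaction */

declare @rows int
select @rows = 1

while @rows > 0
begin
    begin tran
    delete old_stuff
    from old_stuff, new_stuff
    where old_stuff.key = new_stuff.key

    select @rows = @@rowcount
    commit tran
end

set rowcount 0       /* restore the default, unlimited */
```

Each pass through the loop deletes and commits one bite-size chunk, so the
log can be truncated between batches.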

Back to top


6.1.2: When should I execute an sp_recompile?


An sp_recompile should be issued any time a new index is added or update
statistics is run. Dropping an index will cause an automatic recompile of all
objects that are dependent on the table.

The sp_recompile command simply increments the schemacnt counter for the given
table. All dependent objects' counters are checked against this counter and if
they are different the SQL Server recompiles the object.
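
Running it is a one-liner; the table name here is illustrative:

```sql
/* after creating a new index on old_stuff, or updating its statistics */
exec sp_recompile old_stuff
```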

Back to top


6.1.3: What are the different types of (All Page) locks?


First off, just to get it out of the way, Sybase does now support row level
locking! (See Q6.1.11 for a description of the new features.) OK, that said and
done, if you think you need row level locking, you probably aren't thinking set
based -- see Q6.1.1 for set processing.

The SQL Server uses locking in order to ensure the sanity of your queries.
Without locking there is no way to ensure the integrity of your operation.
Imagine a transaction that debited one account and credited another. If the
transaction didn't lock out readers/writers then someone can potentially see
erroneous data.

Essentially, the SQL Server attempts to use the least intrusive lock possible,
page lock, to satisfy a request. If it reaches around 200 page locks, then it
escalates the lock to a table lock and releases all page locks thus performing
the task more efficiently.

There are three types of locks:

* page locks
* table locks
* demand locks

Page Locks

There are three types of page locks:

* shared
* exclusive
* update


Shared Locks

These locks are requested and used by readers of information. More than one
connection can hold a shared lock on a data page. This allows for multiple
readers.


Exclusive Locks

The SQL Server uses exclusive locks when data is to be modified. Only one
connection may have an exclusive lock on a given data page. If a table is large
enough and the data is spread sufficiently, more than one connection may update
different data pages of a given table simultaneously.


Update Locks

An update lock is placed during a delete or an update while the SQL Server is
hunting for the pages to be altered. While an update lock is in place, there
can be shared locks, thus allowing for higher throughput.

The update lock(s) are promoted to exclusive locks once the SQL Server is ready
to perform the delete/update.

Table Locks

There are three types of table locks:

* intent
* shared
* exclusive


Intent Locks

Intent locks indicate the intention to acquire a shared or exclusive lock on a
data page. Intent locks are used to prevent other transactions from acquiring
shared or exclusive locks on the given page.


Shared Locks

This is similar to a page level shared lock but it affects the entire table.
This lock is typically applied during the creation of a non-clustered index.


Exclusive Locks

This is similar to a page level exclusive lock but it affects the entire table.
If an update or delete affects the entire table, an exclusive table lock is
generated. Also, an exclusive table lock is generated during the creation of a
clustered index.
Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets
a demand lock to indicate that a transaction is next to lock a table or a page.

This avoids indefinite postponement if there was a flurry of readers when a
writer wished to make a change.

Back to top


6.1.4: What's the purpose of using holdlock?


All select/readtext statements acquire shared locks (see Q6.1.3) to retrieve
their information. After the information is retrieved, the shared lock(s) is/
are released.

The holdlock option is used within transactions so that after the select/
readtext statement the locks are held until the end of the transaction:

* commit transaction
* rollback transaction

If holdlock is not used within a transaction, the shared locks are released as
soon as the select/readtext statement completes.

Assume we have the following two transactions and that each where-clause
qualifies a single row:

tx #1

begin transaction
/* acquire a shared lock and hold it until we commit */
1: select col_1 from table_a holdlock where id=1
2: update table_b set col_3 = 'fiz' where id=12
commit transaction

tx #2

begin transaction
1: update table_a set col_2 = 'a' where id=1
2: update table_c set col_3 = 'teo' where id=45
commit transaction

If tx#1, line 1 executes prior to tx#2, line 1, tx#2 waits to acquire its
exclusive lock until tx#1 releases the shared level lock on the object. This
will not be done until the commit transaction, thus slowing user throughput.

On the other hand, if tx#1 had not used the holdlock attribute, tx#2 would not
have had to wait until tx#1 committed its transaction. This is because shared
level locks are released immediately (even within transactions) when the
holdlock attribute is not used.

Note that the holdlock attribute does not stop another transaction from
acquiring a shared level lock on the object (i.e. another reader). It only
stops an exclusive level lock (i.e. a writer) from being acquired.

Back to top


6.1.6: How do I find the oldest open transaction?

select h.spid, u.name, p.cmd, h.name, h.starttime,
p.hostname, p.hostprocess, p.program_name
from master..syslogshold h,
master..sysprocesses p,
master..sysusers u
where h.spid = p.spid
and p.suid = u.suid
and h.spid != 0 /* not replication truncation point */

Back to top


6.1.7: How do I check if log truncation is blocked?


System 11 and beyond:

select h.spid, convert(varchar(20), h.name), h.starttime
from master..syslogshold h,
sysindexes i
where h.dbid = db_id()
and h.spid != 0
and i.id = 8 /* syslogs */
and h.page in (i.first, i.first+1) /* first page of log = page of oldest xact */

Back to top


6.1.8: The timestamp datatype


The timestamp datatype is a user-defined datatype supplied by Sybase, defined as:

varbinary(8) NULL

It has a special use when used to define a table column. A table may have at
most one column of type timestamp, and whenever a row containing a timestamp
column is inserted or updated the value in the timestamp column is
automatically updated. This much is covered in the documentation.

What isn't covered is what the values placed in timestamp columns actually
represent. It is a common misconception that timestamp values bear some
relation to calendar date and/or clock time. They don't - the datatype is
badly-named. SQL Server keeps a counter that is incremented for every write
operation - you can see its current value via the global variable @@DBTS
(though don't try and use this value to predict what will get inserted into a
timestamp column as every connection shares the same counter.)

The value is maintained between server startups and increases monotonically
over time (though again you cannot rely on this behaviour). Eventually the
value will wrap, potentially causing huge problems, though you will be warned
before it does - see Sybase Technical News Volume 5, Number 1 (see Q10.3.1).
You cannot convert this value to a datetime value - it is simply an 8-byte
binary counter.

Note that the global timestamp value is used for recovery purposes in the
event of an RDBMS crash. As transactions are committed to the log, each
transaction gets a unique timestamp value. The checkpoint process places a
marker in the log with its unique timestamp value. If the RDBMS crashes,
recovery is the process of looking for transactions that need to be rolled
forward and/or backward from the checkpoint event. If a transaction spans the
checkpoint event and never completed, it too needs to be rolled back.
Essentially, this describes the write-ahead log protocol described by C.J.
Date in An Introduction to Database Systems.

So what is it for? It was created in order to support the browse-mode functions
of DB-Library (and for recovery as mentioned above). This enables an
application to easily support optimistic locking (See Q1.5.4) by guaranteeing a
watch column in a row will change value if any other column in that row is
updated. The browse functions checked that the timestamp value was still the
same as when the column was read before attempting an update. This behaviour is
easy to replicate without necessarily using the actual client browse-mode
functions - just read the timestamp value along with the other data retrieved
to the client, and compare the stored value with the current value prior to an
update.
Back to top


6.1.9: Stored Procedure Recompilation and Reresolution


When a stored procedure is created, the text is placed in syscomments and a
parse tree is placed in sysprocedures. At this stage there is no compiled query
plan.

A compiled query plan for the procedure only ever exists in memory (that is, in
the procedure cache) and is created under the following conditions:

1. A procedure is executed for the first time.
2. A procedure is executed by a second or subsequent user when the first plan
in cache is still in use.
3. The procedure cache is flushed by server restart or cache LRU flush
4. The procedure is executed or created using the with recompile option.
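For illustration, the two "with recompile" variants in point 4 look like this
(list_titles and titles are hypothetical names):

```sql
-- A fresh plan is compiled on every execution of this procedure
create procedure list_titles
with recompile
as
    select title_id, title from titles
go

-- Or force a fresh plan for just one execution of an ordinary procedure
exec list_titles with recompile
go
```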

If the objects the procedure refers to change in some way - indexes dropped,
table definition changed, etc - the procedure will be reresolved - which
updates sysprocedures with a modified tree. Before 10.x the tree grows and in
extreme cases the procedure can become too big to execute. This problem
disappears in Sybase System 11. This reresolution will always occur if the
stored procedure uses temporary tables (tables that start with "#").

There is apparently no way of telling if a procedure has been reresolved.

Traceflag 299 offers some relief; see Q1.3.3 for more information regarding
this traceflag.
The Official Explanation -- Reresolution and Recompilation Explained

When stored procedures are created, an entry is made in sysprocedures that
contains the query tree for that procedure. This query tree is the resolution
of the procedure and the applicable objects referenced by it. The syscomments
table will contain the actual procedure text. No query plan is kept on disk.
Upon first execution, the query tree is used to create (compile) a query plan
(execution plan) which is stored in the procedure cache, a server memory
structure. Additional query plans will be created in cache upon subsequent
executions of the procedure whenever all existing cached plans are in use. If a
cached plan is available, it will be used.

Recompilation is the process of using the existing query tree from
sysprocedures to create (compile) a new plan in cache. Recompilation can be
triggered by any one of the following:

* First execution of a stored procedure,
* Subsequent executions of the procedure when all existing cached query plans
are in use,
* If the procedure is created with the recompile option, CREATE PROCEDURE
sproc WITH RECOMPILE,
* If execution is performed with the recompile option, EXECUTE sproc WITH
RECOMPILE.

Re-resolution is the process of updating the query tree in sysprocedures AND
recompiling the query plan in cache. Re-resolution only updates the query tree
by adding the new tree onto the existing sysprocedures entry. This process
causes the procedure to grow in size which will eventually cause an execution
error (Msg 703 - Memory request failed because more than 64 pages are required
to run the query in its present form. The query should be broken up into
shorter queries if possible). Execution of a procedure that has been flagged
for re-resolution will cause the re-resolution to occur. To reduce the size of
a procedure, it must be dropped which will remove the entries from
sysprocedures and syscomments. Then recreate the procedure.

Re-resolution can be triggered by various activities most of which are
controlled by SQL Server, not the procedure owner. One option is available for
the procedure owner to force re-resolution. The system procedure, sp_recompile,
updates the schema count in sysobjects for the table referenced. A DBA usually
will execute this procedure after creating new distribution pages by use of
update statistics. The next execution of procedures that reference the table
flagged by sp_recompile will have a new query tree and query plan created.
Automatic re-resolution is done by SQL Server in the following scenarios:

* Following a LOAD DATABASE on the database containing the procedure,
* After a table used by the procedure is dropped and recreated,
* Following a LOAD DATABASE of a database where a referenced table resides,
* After a database containing a referenced table is dropped and recreated,
* Whenever a rule or default is bound or unbound to a referenced table.
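The sp_recompile route mentioned above is straightforward to use. For example,
after refreshing statistics on a (hypothetical) table titles:

```sql
update statistics titles
go
-- Bump sysobjects.schema for titles: every procedure that references it
-- gets a new query tree and query plan on its next execution
exec sp_recompile titles
go
```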

Forcing automatic compression of procedures in System 10 is done with trace
flag 241. System 11 should be doing automatic compression, though this is not
confirmed.

When are stored procedures compiled?

Stored procedures are in a database as rows in sysprocedures, in the form of
parse trees. They are later compiled into execution plans.

A stored procedure is compiled:

1. with the first EXECute, when the parse tree is read into cache
2. with every EXECute, if CREATE PROCEDURE included WITH RECOMPILE
3. with each EXECute specifying WITH RECOMPILE
4. if the plans in cache for the procedure are all in use by other processes
5. after a LOAD DATABASE, when all procedures in the database are recompiled
6. if a table referenced by the procedure cannot be opened (using object id),
when recompilation is done using the table's name
7. after a schema change in any referenced table, including:
1. CREATE INDEX or DROP INDEX to add/delete an index
2. ALTER TABLE to add a new column
3. sp_bindefault or sp_unbindefault to add/delete a default
4. sp_bindrule or sp_unbindrule to add/delete a rule
8. after EXECute sp_recompile on a referenced table, which increments
sysobjects.schema and thus forces re-compilation

What causes re-resolution of a stored procedure?

When a stored procedure references an object that is modified after the
creation of the stored procedure, the stored procedure must be re-resolved.
Re-resolution is the process of verifying the location of referenced objects,
including the object id number. Re-resolution will occur under the following
circumstances:

1. One of the tables used by the stored procedure is dropped and re-created.
2. A rule or default is bound to one of the tables (or unbound).
3. The user runs sp_recompile on one of the tables.
4. The database the stored procedure belongs to is re-loaded.
5. The database that one of the stored procedure's tables is located in is
re-loaded.
6. The database that one of the stored procedure's tables is located in is
dropped and re-created.

What will cause the size of a stored procedure to grow?

Any of the following will cause a stored procedure to grow when it is
re-resolved:

1. One of the tables used in the procedure is dropped and re-created.
2. A new rule or default is bound to one of the tables or the user runs
sp_recompile on one of the tables.
3. The database containing the stored procedure is re-loaded.

Other things causing a stored procedure to be re-compiled will not cause it to
grow. For example, dropping an index on one of the tables used in the procedure
forces a recompile but not a re-resolution, so the procedure does not grow.

The difference is between simple recompilation and re-resolution. Re-resolution
happens when one of the tables changes in such a way that the query trees
stored in sysprocedures may be invalid. The datatypes, column offsets, object
ids or other parts of the tree may change. In this case, the server must
re-allocate some of the query tree nodes. The old nodes are not de-allocated
(there is no way to do this within a single procedure header), so the procedure
grows. In time, trying to execute the stored procedure will result in a 703
error about exceeding the 64 page limit for a query.

Back to top


6.1.10: How do I manipulate varbinary columns?


The question was posed: how do we manipulate varbinary columns, given that
some portion - like the 5th and 6th bits of the 3rd byte - of a (var)binary
column needs to be updated? Here is one approach, provided by Bret Halford,
using stored procedures to set or clear certain bits of a certain byte of a
field of a row with a given id:

drop table demo_table
drop procedure clear_bits
drop procedure set_bits
create table demo_table (id numeric(18,0) identity, binary_col binary(20))
insert demo_table values (0xffffffffffffffffffffffffffffffffffffffff)
insert demo_table values (0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
insert demo_table values (0x0000000000000000000000000000000000000000)

create procedure clear_bits (
@id numeric(18,0),  -- primary key of row to be changed
@bytenum tinyint,   -- specifies which byte of binary_col to change
@mask binary(1))    -- bits to be cleared are turned on,
                    -- bits to be left alone are zeroed
                    -- so 0xff = clear all, 0xaa = clear bits 2,4,6,8
as
update demo_table set binary_col =
    substring(binary_col, 1, @bytenum - 1) +              -- leading bytes
    convert(binary(1),                                    -- the changed byte:
        convert(tinyint, substring(binary_col, @bytenum, 1)) &
        ~convert(tinyint, @mask)) +                       -- AND with NOT mask
    substring(binary_col, @bytenum + 1,                   -- trailing bytes
              datalength(binary_col) - @bytenum)
from demo_table
where id = @id

create procedure set_bits (
@id numeric(18,0),  -- primary key of row to be changed
@bytenum tinyint,   -- specifies which byte of binary_col to change
@mask binary(1))    -- bits to be set are turned on,
                    -- bits to be left alone are zeroed
                    -- so 0xff = set all, 0xfb = set all but 3
as
update demo_table set binary_col =
    substring(binary_col, 1, @bytenum - 1) +              -- leading bytes
    convert(binary(1),                                    -- the changed byte:
        convert(tinyint, substring(binary_col, @bytenum, 1)) |
        convert(tinyint, @mask)) +                        -- OR with the mask
    substring(binary_col, @bytenum + 1,                   -- trailing bytes
              datalength(binary_col) - @bytenum)
from demo_table
where id = @id

select * from demo_table
-- clear bits 2,4,6,8 of byte 1 of row 1
exec clear_bits 1,1,0xAA

-- set bits 1-8 of byte 20 of row 3
exec set_bits 3,20,0xff

-- clear bits 1-8 of byte 4 of row 2
exec clear_bits 2,4,0xff

-- clear bit 3 of byte 5 of row 2
exec clear_bits 2,5,0x08
exec clear_bits 2,6,0x0f
exec set_bits 2,10,0xff

select * from demo_table

Back to top


6.1.11: How do I remove duplicate rows from a table?


There are a number of different ways to achieve this, depending on what you are
trying to achieve. Usually, you are trying to remove duplication of a certain
key due to changes in business rules or recognition of a business rule that was
not applied when the database was originally built.

Probably the quickest method is to build a copy of the original table:

select *
into temp_table
from base_table
where 1=0

Create a unique index, with the ignore_dup_key attribute, on the columns that
cover the duplicating rows. This may be more columns than the key for the
table.

create unique index temp_idx
on temp_table(col1, col2, ..., colN)
with ignore_dup_key

Now, insert base_table into temp_table.

insert temp_table
select * from base_table

You probably want to ensure you have a very good backup of the base_table at
this point, because you're about to clear it out! You will also want to check
that temp_table includes the rows you need, and that there are no triggers
(remember to keep a copy!) or RI constraints on the base table. You probably
do not want any of these to fire, or if they do, you should be aware of the
implications.

Now you have a couple of choices. You can simply drop the original table and
rename the temp table to the same name as the base table. Alternatively,
truncate the base table and insert into it from temp_table. You would need to
take this second route if you do need RI constraints or triggers to fire on
the table. I suspect that in most cases dropping and renaming will be the best
option.
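For the drop-and-rename route, sp_rename does the job. A minimal sketch, using
the table names from above (make sure the backup exists first):

```sql
exec sp_rename 'base_table', 'base_table_old'  -- keep the original for safety
go
exec sp_rename 'temp_table', 'base_table'      -- deduplicated copy goes live
go
```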

If you simply want to see the duplicates in a table, the following query will
help:

select key1, key2, ...
from base_table
group by key1, key2, key3, key4, ...
having count(*) > 1

Sybase will actually allow a "select *", but it is not guaranteed to work.

Back to top


SQL Advanced bcp ASE FAQ

David Owen

Apr 20, 2004, 9:45:10 AM
Archive-name: databases/sybase-faq/part11

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Platform Specific Issues - Solaris

2.1.1 Should I run 32 or 64 bit ASE with Solaris?
2.1.2 What is Intimate Shared Memory or ISM?

Platform Specific Issues - NT Performance and Tuning ASE FAQ


2.1.1: Should I run 32 or 64 bit ASE with Solaris?


Sybase's first foray into 64-bit was with release 11.9.3. I do not know much
about that release, but I seem to remember that it was always lagging behind
its sister release, 11.9.2.

With ASE 12, Sybase have both 32-bit and 64-bit versions at the same release
level. This is a big improvement, since it removes the confusion that was
prevalent with 11.9.3 as to why the two were on different numbers. The releases
are supposed to be identical in terms of functionality, save the fact that the
64-bit version can address more memory.

So, why not just be done with it and have just the one version? Firstly, I
suppose that not everyone who can run Solaris has the capability to run the
64-bit version. There are still a lot of 32-bit Sparc chips around and a lot of
people use them. It is also possible to run 32-bit Solaris on a 64-bit
machine. In order to be able to run 64-bit Sybase you will have to be running
64-bit Solaris.
If you have a 64-bit environment, you still need to choose which Sybase
version to run. If you have more than 4G bytes of memory on your machine and
you would like Sybase to take advantage of it, then the 64-bit version is for
you. If not, then the word on the street, and from Sybase themselves, is that
in identical environments, the 32-bit version runs slightly faster. I have
heard a couple of explanations as to why this is so, but nothing that I find
100% convincing.

Back to top


2.1.2: What is Intimate Shared Memory or ISM?


Intimate Shared Memory or ISM is a specific feature of Sun Solaris. The feature
was developed so that when multiple processes (at OS level) access a shared
memory region, they do not each maintain their own TLB (Translation Lookaside
Buffer) entries at OS kernel level. This saves a lot of kernel memory.

I don't think that does a whole lot for Sybase - more for Oracle, I suppose.
However, there is a useful side effect. If there is enough memory available on
the machine, Solaris will typically not swap out process memory marked as ISM
if it can possibly help it.

Swapping in Solaris is done in three phases: reserved, allocated and used.
Locking the shared memory has the advantage of increasing performance. Of
course, if there are lots of processes on the machine and new processes starve
for memory, there is a potential that ISM will get swapped.

For performance reasons, it is worth ensuring that Sybase can allocate its
shared memory segment using ISM. ASE tries to use ISM by default and will
display an error message during start-up if this is not possible. It is
probably worth starting Sybase soon after a machine is rebooted to give it the
best possible chance of using ISM.

More details can be found on the Sunsolve web site. I don't have a URL, sorry.
I am not even sure if this is a public site or not.

Back to top


Platform Specific Issues - NT Performance and Tuning ASE FAQ

Platform Specific Issues - NT/2000

2.2.1 How to Start ASE on Remote NT Servers
2.2.2 How to Configure More than 2G bytes of Memory for ASE on NT
2.2.3 Installation Issues

Platform Specific Issues - Linux Platform Specific Issues - Solaris ASE FAQ


2.2.1: How to Start ASE on Remote NT Servers


Currently, there is no method of starting ASE on a remote NT server using
Sybase Central. So how do you get ASE running on an NT server located in one
city when you are currently located in another? OK, OK, so flying there is an
option, but let's try to stay within the realms of practicality <g>.

One option is to buy a good telnet server and telnet onto the box and then
start it using the "RUN_<server>.BAT" file. This works, but depending on the
telnet server can be a little troublesome. NT does not have as nice a set of
commands as Unix, so there is no "startserver" to run the server in the
background. This means the telnet window used to start the server may have to
stay open for the lifetime of the server, making the health of ASE dependent
upon two machines not crashing. As I say, your mileage may vary, but I have
certainly found this to be the case with at least one telnet server.

Another option is to use SRVMGR.EXE from the Windows NT Resource Kit. Roughly,
you issue

srvmgr \\SERVER-TO-BE-MANAGED
(obviously replacing SERVER-TO-BE-MANAGED with the name of the server you wish
to start ASE on!)

Select the "Services" option, and start ASE as if you were in the "Services"
applet on a local NT server.

Yet another option is to install PC Anywhere or VNC on both machines and use
one of these tools to control the system remotely. (VNC is a very good
alternative to PC Anywhere, except that its clients and servers run on NT,
Unix and Linux; the source code is available and it is free, in both senses of
the word!)

If anyone knows of any better methods, please let me know and I will add them
to this section. Thanks.

Back to top


2.2.2: How to Configure More than 2G bytes of Memory for ASE on NT.


The following was posted on one of the Sybase newsgroups, taken directly from
Sybase SPS case notes.

(I read recently that this is not needed, that Sybase does all of this for you
before it leaves the factory. If anyone knows the real answer, I would be
grateful for an update.)

If you are using NT Server Enterprise Edition or Windows 2000 Advanced Server,
you may be able to get up to 3GB:

Here is what you need to do in order to configure greater than 2GB memory for
ASE on NT:

Step 1: Make a backup copy of sqlsrvr.exe in the sybase bin directory

Step 2: Verify the current settings of sqlsrvr.exe using imagecfg.exe:

imagecfg sqlsrvr.exe
sqlsrvr.exe contains the following configuration information:
Subsystem Version of 4.0
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

Step 3: Use imagecfg to switch on large addressing using the -l (lowercase L)
flag:
imagecfg -l sqlsrvr.exe
sqlsrvr.exe contains the following configuration information:
Subsystem Version of 4.0
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

sqlsrvr.exe updated with the following configuration information:

Subsystem Version of 4.0
Image can handle large (>2GB) addresses
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

Step 4: verify ASE is able to start

Step 5: The NT machine must be booted with the /3GB flag and must have
sufficient paging file space (e.g., if you want ASE to access 3G of memory then
the paging file must be at least that size)

Step 6: increase total memory to say 2.2 gb (anything > 2gb)

Step 7: increase starting virtual memory address to 23662592 decimal (which is
1691000 hex) as shown:

sp_configure 'shared memory starting address', 23662592

Step 8: restart server

Step 9: test to connect a lot of users (more than 240)

Back to top


2.2.3: Installation issues.


This is a list of items to be aware of when installing ASE onto NT/2000.

* Make sure that you install onto a local drive. This might not affect all
versions of ASE on NT/2000, but I could not get the software to install and
run from a network drive with the 12.5 developer edition. Try as I might,
it kept failing without really telling me why. I aborted the installation,
installed onto one of the local drives, and it worked like a charm. My only
NT/2000 machine is my laptop with only one drive, so I do not know if this
affects any drive other than "C" or just network-mounted drives. I will
be happy to take advice and corrections from Sybase or anyone who can tell
me what I was doing wrong.

Back to top


Platform Specific Issues - Linux Platform Specific Issues - Solaris ASE FAQ

Platform Specific Issues - Linux

2.3.1 ASE on Linux FAQ

DBCCs Platform Specific Issues - NT ASE FAQ


2.3.1: ASE on Linux FAQ


There is an FAQ covering ASE on Linux at Michael Peppler's site.

It contains a fair bit of information about running Sybase ASE on Linux and if
you are interested in doing just that, then go read it. It certainly will
answer your question about why, after a new install, you can connect from the
server that ASE is installed on but no other client. (I am not going to tell
you here, you will have to go and read it :-)

Back to top


DBCCs Platform Specific Issues - NT ASE FAQ


3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 Other sites with DBCC information.
3.4 Fixing a Munged Log

Performing any of the above may corrupt your ASE installation. Please do
not call Sybase Technical Support after screwing up ASE. Remember, always
take a dump of the master database and any other databases that are to be
touched before you start.
isql Platform Specific Issues - Linux ASE FAQ Index


3.1: How to set TS Role


Some DBCC commands require that you set TS Role in order to run them. Here's
how to set it:

Login to Server as sa and perform the following:

sp_role "grant", sybase_ts_role, sa

followed by:

set role "sybase_ts_role" on

Back to top


3.2: DBCC Command Reference


Here is the list of DBCC commands that have been sent into the FAQ. If you
know of any more or have more information, then please send it in; this is,
after all, a resource for us all.

As ASE develops, some of the dbccs change. I have pointed out the major
changes from one release to another that I know about. However, one change is
so common that it will save a lot of space if I say it once: where there is an
option to specify dbid or dbname, in previous releases only dbid would be
accepted.

| | | |Risk Level|
| DBCC Name | Argument List | Comments | / |
| | | |Supported?|
|allocdump |( dbid | dbname, page ) | | |
| |( { print_bufs | no_print }, bucket_limit ) |Format prior to ASE 12. | |
|bhash |-----------------------------------------------------+--------------------------+----------|
| |( cname [, clet_id [, { print_bufs | no_print |Format ASE 12 and later. | |
| |},bucket_limit]] ) | | |
| |( [ dbid ][, objid ][, nbufs ], printopt = {0 | 1 | |Format prior to ASE 11. | |
| |2},buftype) | | |
| |-----------------------------------------------------+--------------------------+----------|
| |[ (dbid | dbname [, objid | objname [, nbufs [, | | |
| |printopt = { 0 | 1 | 2 } |Format prior to ASE 12. | |
| |[, buftype = { kept | hashed | nothashed | ioerr} [, | | |
|buffer |cachename ] ] ] ] ] ) ] | | |
| |-----------------------------------------------------+--------------------------+----------|
| |[ (dbid | dbname [, objid | objname [, nbufs [, | | |
| |printopt = { 0 | 1 | 2 } | | |
| |[, buftype = { kept | hashed | nothashed | ioerr} [, |Format ASE 12 and later. | |
| |cachename [, cachelet_id ] | | |
| |] ] ] ] ] ) ] | | |
| |( startaddress, length ) |Format prior to ASE 12. | |
|bytes |-----------------------------------------------------+--------------------------+----------|
| |(startaddress, length [, showlist | STRUCT_NAME]) |Format ASE 12 and later. | |
| | |Uninstall and Uncache | |
|cacheremove |(dbid|dbname, objid|objname) |descriptor for an object | |
| | |from cache | |
|checkalloc |[( dbname [, fix | nofix ] ) ] | | |
|checkcatalog |[( dbname )] | | |
|checkdb |[( dbname [, skip_ncindex ] ) ] | | |
|checktable |( tablename | tabid [, skip_ncindex ] ) | | |
| | |Error can take one of the | |
| | |following values: | |
| | | | |
| | | * 1133 error | |
| | | demonstrates that a | |
| | | page we think is an | |
| | | oam is not | |
| | | * 2502 error shows | |
| | | multiple references to| |
| | | the same page | |
| | | * 2503 error shows a | |
| | | breakage in the page | |
| | | linkage | |
| | | * 2521 error shows that | |
| | | the page is referenced| |
| | | but is not allocated | |
| | | on the extent page | |
| | | * 2523 error shows that | |
| | | the page number in the| |
| | | page or catalog | |
| | | entries are | |
| | | out-of-range for the | |
| | | database | |
| | | * 2525 error shows that | |
| | | an extent objid/indid | |
| | | do not match what is | |
| | | on the page | |
| | | * 2529 error shows a | |
|corrupt |( tablename, indid, error ) | page number | |
| | | out-of-range for the | |
| | | database or a 605 | |
| | | style scenario | |
| | | * 2540 error occurs when| |
| | | a page is allocated on| |
| | | an extent but the page| |
| | | is not referenced in | |
| | | the page chain | |
| | | * 2546 error occurs when| |
| | | an extent is found for| |
| | | an object without an | |
| | | an object without any| |
| | | referenced (a stranded| |
| | | extent) | |
| | | * 7939 error occurs when| |
| | | an allocation page | |
| | | which has extents for | |
| | | an object are not | |
| | | reflected on the OAM | |
| | | page | |
| | | * 7940 error occurs when| |
| | | the total counts in | |
| | | the OAM page differ | |
| | | from the actual count | |
| | | of pages in the chain | |
| | | * 7949 error is similar | |
| | | to a 7940 except that | |
| | | the counts are on an | |
| | | allocation page basis | |
| | |cursor_level - level of | |
|cursorinfo |(cursor_level, cursor_name) |nesting. -1 is all nesting| |
| | |levels | |
|dbinfo |( [ dbname ] ) | | |
|dbrepair |( dbid, option = { dropdb | fixindex | fixsysindex },| | |
| |table, indexid ) | | |
|dbrepair |( dbid, ltmignore) | | |
|dbtable |( dbid ) | | |
|delete_row |( dbid, pageid, delete_by_row = { 1 | 0 }, rownum ) | | |
|des |( [ dbid ][, objid ] ) | | |
| | |eng func may be: | |
| | | | |
|engine |(eng_func) | * "online" | |
| | | * "offline", ["<engine | |
| | | number>"] | |
|extentcheck |( dbid, objid, indexid, sort = {1|0} ) | | |
|extentdump |( dbid, page ) | | |
|extentzap |( dbid, objid, indexid, sort ) | | |
|findnotfullextents|( dbid, objid, indexid, sort = { 1 | 0 } ) | | |
|fix_al |( [ dbname ] ) | | |
|help |( dbcc_command ) | | |
|ind |( dbid, objid, printopt = { 0 | 1 | 2 } ) | | |
|indexalloc |(tablename|tabid, indid, [full | optimized | fast], | | |
| |[fix | nofix]) | | |
|listoam |(dbid | dbname, tabid | tablename, indid) | | |
|locateindexpgs |( dbid, objid, page, indexid, level ) | | |
|lock | |print out lock chains | |
|log |( [dbid][,objid][,page][,row][,nrecords][,type= | | |
| |{-1..36}],printopt={0|1} ) | | |
|memusage | | | |
|netmemshow |( option = {1 | 2 | 3} ) | | |
|netmemusage | | | |
|newalloc |( dbname, option = { 1 | 2 | 3 } ) | | |
|page |( dbid, pagenum [, printopt={0|1|2} ][, cache={0|1} ]| | |
| |[, logical={1|0} ] ) | | |
|pglinkage |( dbid, start, number, printopt={0|1|2}, target, | | |
| |order={1|0} ) | | |
|pktmemshow |( option = {spid} ) | | |
|procbuf |( dbid, objid, nbufs, printopt = { 0 | 1 } ) | | |
|prtipage |( dbid, objid, indexid, indexpage ) | | |
|pss |( suid, spid, printopt = { 1 | 0 } ) | | |
|rebuildextents |( dbid, objid, indexid ) | | |
| | |careful as this will cause| |
|rebuild_log |( dbid, 1, 1) |large jumps in your | |
| | |timestamp values used by | |
| | |log recovery. | |
|remap | |Only available prior to | |
| | |12. | |
|resource | | | |
|setkeepalive |(# minutes) |for use on Novell with TCP| |
| | |/IP. | |
| | |Not needed with more | |
| | |recent versions of ASE, | |
| | |use the supplied stored | |
| | |procs. On older versions | |
|settrunc |('ltm','ignore') |of ASE (pre-11?) this | |
| | |command may be useful for | |
| | |a dba who is dumping and | |
| | |loading a database that | |
| | |has replication set on for| |
| | |the original db. | |
| | |Shows the sql that the | |
|sqltext |(spid) |spid is currently | |
| | |running. Blank if idle. | |
|stacktrace |(spid) |Not Linux, yet :-) | |
|show_bucket |( dbid, pageid, lookup_type ) | | |
|tab |( dbid, objid, printopt = { 0 | 1 | 2 } ) | | |
|tablealloc |(tablename|tabid, [full | optimized | fast],[fix | | | |
| |nofix]) | | |
|traceoff |( tracenum [, tracenum ... ] ) | | |
|traceon |( tracenum [, tracenum ... ] ) | | |
| | |Used to switch on/off | |
| | |certain options. Some are| |
| | |supported and listed in | |
| | |the docs, others | |
| | |correspond to the | |
| | |buildmaster -yall name | |
| | |minus the c prefix. | |
| | | | |
| | |Supported: | |
| | | | |
| | | * ascinserts ('value' is| |
| | | again two values, 1|0 | |
| | | for on or off and the | |
| | | table name). | |
| | | * cpuaffinity | |
| | | ('value' in this case | |
|tune |( option, value ) | is two values, the | |
| | | starting cpu number | |
| | | and "on" or "off".) | |
| | | * maxwritedes | |
| | | | |
| | |Unsupported: | |
| | | | |
| | | * indextrips | |
| | | * oamtrips | |
| | | * datatrips | |
| | | * schedspins | |
| | | * bufwashsize | |
| | | * sortbufsize | |
| | | * sortpgcount | |
| | | * maxscheds | |
| | | * max_retries | |
| | | | |
| | | | |
|undo |( dbid, pageno, rowno ) | | |
| |( dbid|dbname, type = {0|1}, display_opts = {0|1} [, |If sp_helpdb is returning | |
|usedextents |bypiece = {0|1}]) |negative free space, try: | |
| | |usedextents(dbid, 0, 1, 1)| |
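Many of the commands in the table write their output to the errorlog rather than the client session unless trace flag 3604 is switched on first. A minimal sketch of wrapping one of them from the shell (the `gen_dbcc` helper and the dbid/page numbers are made up for illustration; `cat` stands in for isql so the generated batch can be inspected without a server):

```shell
#!/bin/sh
# Hypothetical wrapper: dbcc traceon(3604) redirects dbcc output to
# the client session; traceoff(3604) restores the default.  gen_dbcc
# just emits the isql input.
gen_dbcc() {
    cat << 'EOD'
dbcc traceon(3604)
go
dbcc page(2, 1)
go
dbcc traceoff(3604)
go
EOD
}

gen_dbcc | cat     # real use: gen_dbcc | $SYBASE/bin/isql -Usa
```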

Back to top


3.3: Other Sites with DBCC information


*, Al Huntley's site contains a comprehensive list, including
  discussion of some of the commands and example output.
*, KaleidaTech Associates, Inc. has another fairly complete list.
*, as you would expect, Rob Verschoor has a list of
  DBCCs in his ASE Quick Reference Supplement.

Back to top


3.4: Fixing a Munged Log


Sybase Technical Support states that this is extremely dangerous, as it
"jacks up the value of the timestamp" that is used for recovery purposes.
This may cause database corruption if the system fails while the timestamp
rolls over.

In 4.9.2, you could only run the dbcc rebuild_log command once; after
that you would have to use bcp to rebuild the database.

In System 10, you can run this command about 10 times.

In System 11, I (Pablo, the previous editor) tried it about 20 times with no
problems.

1> use master
2> go
1> select count(*) from your_database..syslogs
2> go

some number

1> sp_configure "allow updates",1
2> go
1> reconfigure with override /* for system 10 and below only*/
2> go

1> begin tran
2> go

/* Note the status value returned by the next select; you will
   need it to restore sysdatabases later... */
1> select saved_status=status from sysdatabases where name = "your_database"
2> go
1> update sysdatabases set status = -32768 where name = "your_database"
2> go
1> commit tran
2> go
1> shutdown
2> go

Restart the server; with the status set to -32768 (bypass recovery), the
damaged database is not recovered at startup.

1> dbcc rebuild_log (your_database, 0, 0)
2> go
DB-LIBRARY error (severity 9):
Unexpected EOF from SQL Server.

1> dbcc rebuild_log (your_database, 1, 1)
2> go
DBCC execution completed. If DBCC printed error messages, see your System
Administrator.

1> use your_database
2> go
1> select count(*) from syslogs
2> go


1> begin tran
2> go
1> update sysdatabases set status = saved_status /* the value noted earlier */ where name = "your_database"
2> go
(1 row affected)
1> commit tran
2> go
1> shutdown
2> go

Back to top


isql


4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?
4.4 What alternatives to isql exist?
4.5 How can I make isql secure?



4.1: Hiding your password to isql


Here are a menagerie (I've always wanted to use that word) of different methods
to hide your password. Pick and choose whichever fits your environment best:

Single ASE on host

Script #1

Assuming that you are using bourne shell sh(1) as your scripting language you
can put the password in a file and substitute the file where the password is


# invoke say ISQL or something...
(cat $HOME/dba/password_file
cat << EOD
dbcc ...
go
EOD
) | $SYBASE/bin/isql -Usa -w1000
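To see exactly what reaches isql, the same pattern can be run with cat standing in for isql (the temporary password file and its contents here are purely illustrative):

```shell
#!/bin/sh
# Sketch of Script #1's plumbing: the first line piped in is the
# password, the rest is the batch.  `cat` stands in for isql so the
# input can be checked without a server.
pwfile=/tmp/pw.$$
umask 077
echo 'secret' > "$pwfile"

( cat "$pwfile"
  cat << EOD
dbcc ...
go
EOD
) | cat        # real use: | $SYBASE/bin/isql -Usa -w1000

rm -f "$pwfile"
```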

Script #2

umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
myPassword
use mydb
go
endOfCat

Script #3

umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
`myScriptForGeneratingPasswords myServer`
use mydb
go
endOfCat

Script #4

umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
myPassword
use mydb
go
endOfIsql

Script #5

umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
`myScriptForGeneratingPasswords myServer`
use mydb
go
endOfIsql

Script #6

echo 'mypassword
use mydb
go' | isql -Umyuserid -Smyserver

Script #7

echo "`myScriptForGeneratingPasswords myServer`
use mydb
go" | isql -Umyuserid -Smyserver

Script #8

echo "Password: \c"
stty -echo
read PASSWD
stty echo

echo "$PASSWD
waitfor delay '0:1:00'
" | $SYBASE/bin/isql -Usa -S${DSQUERY}

Multiple ASEs on host

Again, assuming that you are using bourne shell as your scripting language, you
can do the following:

1. Create a global file. This file will contain passwords, generic functions,
master device for the respective DSQUERY.
2. In the actual scripts, source in the global file.

Global File


my_password()
{
    case $1 in
        SERVER_1) PASSWD="this"  ;;
        SERVER_3) PASSWD="bogus" ;;
        *)        return 1       ;;
    esac

    return 0
}

Generic Script

#!/bin/sh -a

# Use "-a" for auto-export of variables

# "dot" the file - equivalent to csh() "source" command
. $HOME/dba/global_file


# Determine the password: sets PASSWD
my_password $DSQUERY
if [ $? -ne 0 ] ; then # error!
   echo "<do some error catching>"
   exit 1
fi

# invoke say ISQL or something...
echo "$PASSWD
dbcc ...
go" | $SYBASE/bin/isql -U sa -S $DSQUERY -w1000

Back to top


4.2: How to remove row affected and dashes


If you pipe the output of isql then you can use sed(1) to remove this
extraneous output:

echo "$PASSWD
go" | isql -U sa -S MY_SERVER | sed -e '/affected/d' -e '/---/d'
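The filter can be tried on canned isql-style output without a server (the sample rows below are made up):

```shell
#!/bin/sh
# Demonstrate the sed filter on fabricated isql-style output:
# the dashed underline and the "row affected" line are removed,
# leaving only the header and the data row.
printf '%s\n' \
    ' name'             \
    ' ------'           \
    ' fred'             \
    '(1 row affected)'  \
| sed -e '/affected/d' -e '/---/d'
```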

If you simply wish to eliminate the row affected line, use the set nocount on
command in your batch.

Back to top


4.3: How do I pipe the output of one isql to another?