Sybase FAQ: 1/19 - index


David Owen

Jan 21, 2004, 4:26:00 AM1/21/04
Archive-name: databases/sybase-faq/part1
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Sybase Frequently Asked Questions

Sybase FAQ Home Page | Adaptive Server Enterprise FAQ | Adaptive Server
Anywhere FAQ | Replication Server FAQ | Search the FAQ
Sybase FAQ

Main Page

* Where can I get the latest release of this FAQ?
* What's new in this release?
* How can I help with the FAQ?
* Who do I tell about problems in the FAQ?
* Acknowledgements and Thanks
* Hall of Fame
* Copyright and Disclaimer
* General Index

Main | ASE | ASA | REP | Search


Where can I get the latest release of this FAQ?

International Sybase User Group

The main page for this site is hosted by kind permission of the International
Sybase User Group as a service to the Sybase community.

To get a text version of this FAQ:


If you want uncompressed versions of the various sections, they can be
obtained from ASE, ASA & REP.

To get the HTML for this FAQ:


Last major update: 21st February 2003.

Back to Top


What's new in this release?

Release 1.9

* Running multiple servers on a single server (UNIX and NT).

Back to Top


What's happening with the FAQ?

I have not had a lot of time to spend on the FAQ this year. Mainly, this is
down to work, or the lack of it. I know, we are all in the same boat. Well, it
has meant that I have had a lot less free time than I used to, and as a result
the FAQ has not been kept as up to date as I would like. Sadly, the work I
have been doing is with those other database vendors, but we won't name them
here. Anyway, that is the sob story over and done with. If anyone thinks that
they would like to see more effort applied, I would be happy to hand the
mantle over. Since the amount of help that I have actually seen amounts to
practically none, I am sure I will not be over-run with offers! I will
definitely have more time come January and plan some serious work on it then.

Back to Top


How can I help with the FAQ?

I have had offers from a couple of people to write sections, but if you feel
that you are in a position to add support for a section, or if you have some
FAQs to add, please let me know. This is a resource that we should all
support, so send me the stuff and I will include it.

Typos and specific corrections are always very useful. Less useful is the
general "I don't think that section x.y.z is very understandable". Sorry to
sound harsh, but what I need is actual text that is more readable. Better
still is actual HTML that makes it stand out and sing (if necessary)!

Currently I am looking for maintainers of the following sections: Replication,
Adaptive Server Anywhere, IQ Server, MPP Server and Open Server. I am not sure
whether to add a section for Omni Server. I sort of feel that, since Omni has
been subsumed into ASE as CIS, any FAQs should really be incorporated there.
However, if you know of some good Omni gotchas or tips, whether they are still
there in CIS or not, please send them in. I certainly plan to have a
subsection of ASE dealing with CIS even if Omni does not get its own major
section. I also think that we need sections on some of the really new stuff.
Jaguar and the new engines also deserve a spot.

Another very useful way that you can help is in getting people to update their
links. I have seen lots of links recently, some still pointing to Pablo's
original, some pointing to Tom's site but referring to it as coming from the
SGI site.

Back to Top


Who do I tell about problems in the FAQ?

The current maintainer is David Owen, and you can send errors in the FAQ
directly to me. If you have an FAQ item (both the question and the answer),
send it to me and I will include it.

Do not send email to any of the officials at ISUG, they are simply hosting the
FAQ and are not responsible for its contents.

Also, do not send email to Sybase, they are not responsible for the contents
either. See the Disclaimer.

Back to Top


Acknowledgements and Thanks

Special thanks must go to the following people for their help in getting this
FAQ to where it is today.

* Pablo Sanchez for getting the FAQ off the ground in the first place and for
many years of dedicated work in maintaining it.

* Anthony Mandic, for a million things: patiently answering questions in all
  of the Sybase news groups, without which most beginners would be lost;
  supporting and encouraging me in getting this FAQ together; and providing
  some pretty neat graphics.

* The ISUG, especially Luc Van der Veurst and Michael Peppler, for hosting
  this FAQ and providing support in setting up the website.

* The members of the various news groups and mailing lists who, like Anthony,
provide unstinting support. The list is fairly long, but I think that Bret
Halford deserves a mention. If you go to Google News and do a search, he
submits almost as many replies as Anthony.

Back to Top


Hall of Fame

I am not sure how Pablo chose his select list, but there is certainly no
question as to their inclusion. I know that there are a couple of awards that
the ISUG gives out each year for the people that ISUG members believe have
contributed most to the Sybase community that year. I think that this section
should honour those people that deserve an award each and every year. If you
know of a candidate, let me know and I will consider his or her inclusion.
Self nominations are not acceptable :-)

The following people have made it to the Sybase FAQ Hall of Fame:

* Michael Peppler. For Sybperl and all of the other tools of which he is
  author or instigator, plus the ceaseless support that he provides through
  countless mailing lists and newsgroups, and directly.
* Scott Gray. Father of sqsh, much more than simply a
replacement for isql. How anyone developing or administering Sybase can
survive without it, I will never know.

* Pablo Sanchez. Pablo got the first web based FAQ off the
ground, wrote most (all?) of the first edition and then maintained it for a
number of years. He did a fantastic job, building a resource that is
worth its weight in gold.

Back to Top


Copyright and Disclaimer


You are free to copy or distribute this FAQ in whole or in part, on any medium
you choose provided that you:

* include this Copyright and Disclaimer notice;
* do NOT distribute or copy, in any fashion, with the intention of making a
profit from its use;
* give FULL attribution to the original authors.


This FAQ is provided as is, without any express or implied warranties. While
every endeavour has been made to ensure the accuracy of the information
contained within the articles, neither the author nor any of the contributors
assumes responsibility for errors or omissions, or for damages resulting from
the use of the information contained herein.

If you are not happy about performing any of the suggestions contained within
this FAQ, you are probably better off calling Sybase Technical Support.


This site and all its contents belong to the Sybase FAQ.

Unless explicitly stated in an article, all material within this FAQ is
copyrighted. The primary copyright holders are David Owen and Pablo Sanchez.
However, all contributed material is, and will remain, the property of the
respective authors and contributors.

Back to Top


1.1: Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?

1.2: User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

1.3: Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database bigger?
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

1.4: General Troubleshooting

1.4.1 How do I turn off marked suspect on my database?
1.4.2 On startup, the transaction log of a database has filled and recovery
has suspended, what can I do?
1.4.3 Why do my page locks not get escalated to a table lock after 200 locks?

1.5: Performance and Tuning

1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered indexes?
1.5.4 Optimistic versus pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

1.6: Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

2.1: Platform Specific Issues - Solaris

2.1.1 Should I run 32 or 64 bit ASE with Solaris?
2.1.2 What is Intimate Shared Memory or ISM?

2.2: Platform Specific Issues - NT/2000

2.2.1 How to Start ASE on Remote NT Servers
2.2.2 How to Configure More than 2G bytes of Memory for ASE on NT
2.2.3 Installation Issues

2.3: Platform Specific Issues - Linux

2.3.1 ASE on Linux FAQ

3: DBCC's

3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 Other sites with DBCC information.
3.4 Fixing a Munged Log

Performing any of the above may corrupt your ASE installation. Please do
not call Sybase Technical Support after screwing up ASE. Remember, always
take a dump of the master database and any other databases that are to be
affected.
4: isql

4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?
4.4 What alternatives to isql exist?
4.5 How can I make isql secure?

5: bcp

5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

6.1: SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

6.2: SQL Advanced

6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE
6.2.11 Is it possible to concatenate all the values from a column and
return a single row?
6.2.12 Selecting rows N to M without Oracle's rownum?
6.2.13 How can I return number of rows that are returned from a grouped
query without using a temporary table?

6.3: Useful SQL Tricks

6.3.1 How to feed the result set of one stored procedure into another.
6.3.2 Is it possible to do dynamic SQL before ASE 12?

7: Open Client

7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to ASE using ODBC
7.7 Which version of Open Client works with which ASE?
7.8 How do I tell the version of Open Client I am running?

9: Freeware

9.0 Where is all the code and why does Section 9 suddenly load in a
reasonable amount of time?

Stored Procedures

9.1.1 sp_freedevice - lists device, size, used and free.
9.1.2 sp_dos - This procedure graphically displays the scope of a
9.1.3 sp_whodo - augments sp_who by including additional columns: cpu,
9.1.4 sp__revroles - creates DDL to sp_role a mirror of your SQL Server
9.1.5 sp__rev_configure - creates DDL to sp_configure a mirror of your
SQL Server
9.1.6 sp_servermap - overview of your SQL Server
9.1.7 sp__create_crosstab - simplify crosstable queries
9.1.8 sp_ddl_create_table - creates DDL for all user tables in the
current database
9.1.9 sp_spaceused_table
9.1.10 SQL to determine the space used for an index.
9.1.11 sp_helpoptions - Shows what options are set for a database.
9.1.12 sp_days - returns days in current month.
9.1.13 sp__optdiag - optdiag from within isql
9.1.14 sp_desc - a simple list of a tables' columns
9.1.15 sp_lockconfig - Displays locking schemes for tables.

Shell Scripts

9.2.1 SQL and sh(1) to dynamically generate a dump/load database
9.2.2 update statistics script

Perl


9.3.1 SybPerl - Perl interface to Sybase.
9.3.2 - Sybperl script to reverse engineer a database.
9.3.3 - creates insert DDL for a table.
9.3.4 - converts

12: Miscellany

12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase


Adaptive Server Anywhere

0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information


Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication

Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?

Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log
on OLTP is filling.

Additional Information/Links

4.1 Links
4.2 Newsgroups

David Owen

Jan 21, 2004, 4:26:01 AM1/21/04
Archive-name: databases/sybase-faq/part2

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Adaptive Server Anywhere

0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information


0.0 Preamble

I make no claims to be an ASA expert! I am beginning to use it more and more,
and as I use it I am able to add material to this list with more authority.
All of what is here is very general. I am pressing people to help write some
meatier parts. There is nothing here on how to recover from the crashes that
will inevitably happen, or equivalent sections to those in the ASE part.
Performance and Tuning would be a good section! If anyone out there knows of a
good ASA FAQ, then send it to me, and I will get it added. This is a resource
that will help us all. Come on all you TeamSybase/TeamPowerbuilder people, you
must know something on the subject <g>. It is unlikely that this is going to
grow into a particularly useful resource unless I get some serious help!


0.1 What is ASA?

ASA is a fully featured DBMS with transactional integrity, automatic rollback
and recovery, declarative RI, triggers and stored procedures.

While it comes out of Sybase's "Mobile and Embedded" division, it is NOT
limited to "small, desktop applications". There are many ASA implementations
supporting over 100 concurrent users. While not as scalable as ASE, it does
offer SMP support and versions for various Unix flavors as well as Netware and
NT/w2k. Multi-gigabyte databases are commonly used.

ASA offers a number of features that are not to be found in ASE:

* row level BEFORE and AFTER triggers
* long varchar and BLOB up to 2 gigabytes
* varchar up to 32k
* declarative RI with cascade actions
* all character and decimal data is stored variable-length, using only the
space it needs

ASA is designed to be low-maintenance:

* File size automatically grows
* self-tuning
* re-uses space from deletes

ASA also includes:

* Java stored procs
* Stored procedure debugger (I am not sure what sort of debugger, just that
it has one.)
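
As a minimal sketch of the row-level BEFORE triggers listed above, a trigger
in ASA's Watcom-SQL dialect might look like the following. The table, columns
and clamping rule here are hypothetical, purely for illustration:

```sql
-- Hypothetical example: a row-level BEFORE trigger in ASA's Watcom-SQL
-- dialect that inspects and modifies the incoming row before it is written.
CREATE TRIGGER trg_check_price
BEFORE UPDATE OF price ON products
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
    -- Clamp negative prices to zero instead of rejecting the update
    IF new_row.price < 0 THEN
        SET new_row.price = 0;
    END IF;
END;
```

Note that a BEFORE trigger can assign to the NEW row image, something an ASE
(AFTER-only, statement-level) trigger cannot do directly.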


0.2 On what platforms is ASA supported?


* Windows 95/98/ME, NT, 2000, CE
* Novell NetWare
* Solaris/SPARC
* Solaris/Intel
* Linux (RedHat)


0.3 What applications is ASA good for?

ASA seems to have a number of niches. It is generally good at OLTP and can be
used as a basis for a general database project. There are certainly examples
of implementations supporting 100 or more users.

A major area for ASA is applications that need to ship a database with the
application as a general storage area for internal components, where the
database itself is not a major part of the deliverable. Sybase themselves have
done this with the IQ meta data storage. Prior to release 11 of IQ, the meta
data was stored in an ASE database. Now, with IQ 12, the meta data is stored
in ASA. This makes the installation of IQ into production environments much
simpler.

ASA has excellent ODBC support, which makes it very attractive to tools
oriented towards ODBC.


0.4 When would I choose ASA over ASE?

* Ease of administration, e.g., self-tuning optimizer, db file is an OS file
(not a partition).
* Lower footprint - runs on "smaller" machines.
* Lower cost, ASA is definitely cheaper than ASE on the same platform.
* Want to use SQL Remote (asynchronous replication)
* More complete SQL92 implementation.


0.5 Does ASA Support Replication?

In short, yes. ASA comes with SQL Remote, an asynchronous replication server.
SQL Remote is intended to be used in applications where the replication is
not expected to happen immediately. In fact, it may well be hours or even days
before the databases are synchronised. This makes it ideal for the roaming
salesman type of app, where the guy is on the road all day and then dials in
from home, hotel or beach front to re-synch his price list with the master
database.


0.6 What is ASA UltraLite?

UltraLite is a version of ASA that runs on handheld devices. Supported
platforms include:

* Windows 95/98, NT, 2000, CE
* Palm Computing platform
* WindRiver VxWorks
* Symbian EPOC


0.7 I'm interested, where can I find more info?

Breck Carter has a very useful page that is full of detail.

General information about ASA can be found at:

It is a bit of a marketing page, but there are some pointers to white papers.

A very well written reviewers guide can be found at

The page has a link to a pdf document that contains lots of useful information.

David Owen

Jan 21, 2004, 4:26:02 AM1/21/04
Archive-name: databases/sybase-faq/part3

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Replication Server

1. Introduction to Replication Server
2. Replication Server Administration
3. Troubleshooting Replication Server
4. Additional Information/Links

Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication Server?

Thanks go to Manish I Shah for major help with this introduction.



1.1 Introduction


What is Replication Server

Replication Server moves transactions (inserts, updates and deletes) at the
table level from a source dataserver to one or more destination dataservers.
The dataserver could be ASE or another major DBMS (including DB2, Informix
and Oracle). The source and destinations need not be of the same type.

What can it do?

* Move data from one source to another.
* Move only a subset of data from source to destination. So, you can
subscribe to a subset of data, or a subset of the columns, in the source
table, e.g. select * from clients where state = NY
* Manipulation/transformation of data when moving from source to destination.
E.g. it can map data from a data-type in DB2 to an equivalent in Sybase.*
* Provide a warm-standby system. Can be incorporated with Open Switch to
provide a fairly seamless fail-over environment.
* Merge data from several source databases into one destination database
(could be for a warehouse type environment for example).
* Move data through a complicated network down to branch offices, say, only
sending the relevant data to each branch.

(* This is one of Sybase replication's real strengths, the ability to define
function string classes which allow the conversion of statements from one SQL
dialect to match the dialect of the destination machine. Ed)
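
The column- and row-subsetting described above is expressed in RCL
(Replication Command Language) through replication definitions and
subscriptions. A rough sketch follows; the server, database, table and column
names are all hypothetical:

```sql
-- Hypothetical RCL sketch: replicate only two columns of the clients
-- table, and only the rows where state = 'NY'.
create replication definition clients_repdef
with primary at PRIM_ASE.salesdb
with all tables named clients
(client_id int, state char(2))
primary key (client_id)

-- At the replicate-side repserver, subscribe to the NY subset only:
create subscription clients_ny_sub
for clients_repdef
with replicate at REP_ASE.salesdb
where state = 'NY'
```

The where clause on the subscription is what restricts the rows delivered to
the replicate; the column list on the definition restricts the columns.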

How soon does the data move?

The data moves asynchronously. The time it takes to reach the destination
depends on the size of your transaction, the level of activity in that
particular database (a database in the Sybase sense), the length of the chain
(one or more replication servers that the transaction has to pass through to
reach the destination), the thickness of the pipe (network), how busy your
replication server is, etc. Usually, on a LAN, for small transactions, this
is about a second.

Back to top


1.2 Replication Server Components



Primary Dataserver

The source of data, where client applications enter, delete and modify data.
As mentioned before, this need not be ASE; it can be Microsoft SQL Server,
Oracle, DB2 or Informix. (I know that I should get a complete list.)

Replication Agent/Log Transfer Manager

The Log Transfer Manager (LTM) is a separate program/process which reads the
transaction log from the source server and transfers the records to the
replication server for further processing. With ASE 11.5, this has become
part of ASE and is now called the Replication Agent. However, you still need
to use an LTM for non-ASE sources. I imagine there is a version of the LTM
for each kind of source (DB2, Informix, Oracle etc.). When replication is
active, you see one connection per replicated database in the source
dataserver (sp_who).

Replication Server(s)

The replication server is an Open Server/Open Client application. The server
part receives transactions being sent by either the source ASE or the source
LTM. The client part sends these transactions to the target server which could
be another replication server or the final dataserver. As far as I know, the
server does not include the client component of any of the other DBMSes out of
the box.

Replicate (target) Dataserver

The server in which the final replication server in the chain will repeat the
transactions done on the primary. You will see one connection for each target
database in the target dataserver when the replication server is actively
transferring data (when idle, the replication server disconnects, or "fades
out" in replication terminology).

Back to top


1.3 What is the Difference Between Replication Server and SQL Remote?


Both SQL Remote and Replication Server perform replication. SQL Remote was
originally part of the Adaptive Server Anywhere tool kit and is intended for
intermittent replication. (The classic example is that of a salesman
connecting on a daily basis to upload sales and download new prices and
inventory.) Replication Server is intended for near real-time replication.

Back to top



Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?



2.1 How can I improve throughput?


Check the Obvious

First, ensure that you are only replicating those parts of the system that need
to be replicated. Some of this is obvious. Don't replicate any table that
does not need to be replicated. Check that you are only replicating the
columns you need. Replication is very sophisticated and will allow you to
replicate both a subset of the columns as well as a subset of the rows.

Replicate Minimum Columns

Once the replication is set up and synchronised, it is only necessary to
replicate those parts of the primary system that actually change. Even when
you are only replicating the rows and columns that need to be replicated, you
still only need to send the actual changes. Check that each replication
definition is defined using the clause:

create replication definition rep_def_name
with primary...
replicate minimal columns

Second Replication Server

This might be appropriate in a simple environment, on systems with spare
cycles and limited spare bandwidth on the network. When Sybase replicates
from a primary to a replicate using only one replication server, the data is
transferred across the network uncompressed. However, the communication
between two replication servers is compressed. By installing a second
replication server it is possible to dramatically reduce the bandwidth needed
to replicate your data.

Dedicated Network Card

Obviously, if replication is sharing the same network resources that all of the
clients are using, there is the possibility for a bottleneck if the network
bandwidth is close to saturation. If a second replication server is not going
to cut it since you already have one or there are no spare cycles, then a
second network card may be the answer.

First, you will need to configure ASE to listen on two network connections.
This is relatively straightforward. There is no change to the client
configuration. They all continue to talk to Sybase using the same connection.
When defining the replication server, ensure that the interfaces/sql.ini entry
that it uses only has the second connection in it. This may involve some
jiggery pokery with environment variables, but should be possible, even on NT!
You need to be a little careful with network configuration. Sybase will
communicate with the two servers on the correct addresses, but if the
underlying operating system believes that both clients and repserver can be
serviced by the same card, then it will use the first card that it comes to.
So, if all of the clients, ASE and the replication server were on the same
segment, and the host running ASE had two cards onto that segment, then it
would choose to route all packets through the first card. OK, so this is a
very simplistic error to correct, but similar things can happen with more
convoluted and, superficially, better thought out configurations.

+---------+                             +-----------+     +-----------+
|         |--> NE(1) --> All Clients... |           |     |           |
| Primary |                             | repserver |     | replicate |
|         |--> NE(2) ------------------>|           |---->|           |
|         |                             |           |     |           |
+---------+                             +-----------+     +-----------+

So, configure NE(1) and NE(2) to be on, say, separate subnets, and all should
be well. OK, so my character art is not perfect, but I think that you get the
gist!

No Network Card

If RepServer resides on the same physical machine as either the primary or
the replicate, it is possible to use the localhost or loopback network
device. The loopback device is a network interface that connects back to
itself without going through the network interface card, and it almost always
uses the standard loopback IP address. So, apply the technique described
above, but instead of using a dedicated network card, use the loopback
device. Obviously, the two servers have to be on the same physical machine or
it won't work!
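
As a sketch of the interfaces setup described in the last two subsections,
the entries might look like the following. The server names, host names and
port numbers are entirely hypothetical, and the simple TCP entry format is
shown rather than the tli strings that some platforms use. The idea is that
the ASE entry used by clients lists the first address, while the copy of the
interfaces file given to the replication server carries an entry with only
the second (or loopback) address:

```
# Hypothetical interfaces entries. Clients resolve PRIMARY_ASE;
# repserver's own interfaces file contains only PRIMARY_ASE_PRIVATE,
# which points at the second card (or the loopback device).
PRIMARY_ASE
        master tcp ether host-card1 5000
        query  tcp ether host-card1 5000

PRIMARY_ASE_PRIVATE
        query  tcp ether host-card2 5000
```

Since both entries resolve to the same ASE listener, only the route the
packets take differs.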

Back to top


2.2 Where should I install replication server?


A seemingly trivial question, but one that can cause novices a bit of worry.

There are three answers: on the primary machine, on the replicate machine or on
a completely separate machine. There is no right answer, and if you are doing
an initial install it probably pays to consider the future, consider the
proposed configuration and have a look at the load on the available machines.

It is probably fair to say that replication is not power hungry, but neither
is it free. If the primary is only just about coping with its current load,
then it might be as well to look into hosting it on another machine. The same
argument applies to the replicate. If you think that network bandwidth may be
an issue, and you may have to add a second replication server, you may be
better off starting with repserver running on the primary. It is marginally
easier to add a repserver to an existing configuration if the first repserver
is on the primary.
Remember that a production replication server on Unix will require raw devices
for the stable devices and that these can be more than 2GB in size. If you are
restricted in the number of raw partitions you have available on a particular
machine, then this may have a bearing. See Q2.3.

Installing replication server on its own machine will, of course, introduce all
sorts of problems of its own, as well as answering some. The load on the
primary or the replicate is reduced considerably, but you are definitely going
to add some load to the network. Remember that ASE->Rep and Rep->ASE is
uncompressed. It is only Rep->Rep that is compressed.

Back to top


2.3 Using large raw partitions with Replication Server on Unix.


It is good practice with production installations of Replication Server on
Unix to use raw partitions for the stable devices. This is for just the same
reasons that production ASEs use raw partitions. Raw devices can be a maximum
of 2GB with replication server up to release 11.5. (I have not checked later
releases.)

In order to utilise a raw partition that is greater than 2GB in size you can do
the following (remember all of the cautionary warnings about trying this sort
of stuff out in development first!):

add partition firstpartition on '/dev/rdsk/c0t0d0s0' with size 2024
add partition secondpartition on '/dev/rdsk/c0t0d0s0' with size 2024 starting at 2048

Notice that the initial partition is sized at 2024MB and not 2048MB. I have not
found this in the documentation, but replication certainly seems to have a
problem allocating a full 2GB. Interestingly, doing the same operation through
Replication Server Manager and Sybase Central caused no problems at all.

Back to top


2.4 How to replicate col = col + 1


First, note that while the rule that you never update a primary key may be a
philosophical choice in a non-replicated system, it is an architectural
requirement of a replicated system.

If you use simple data replication, your primary table has an id column
holding the values 1, 2 and 3, and you issue a:

update table set id=id+1

Rep server will do this in the replicate:

begin tran
update table set id=2 where id=1
update table set id=3 where id=2
update table set id=4 where id=3
commit tran

Hands up all who can see a bit of a problem with this! Remember, repserver
doesn't replicate statements, it replicates the results of statements.

One way to perform this update is to build a stored procedure on both sides
that executes the necessary update and replicate the stored procedure call.
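
One hedged sketch of that approach (the object names are illustrative, and the
function replication definition on the repserver side is omitted): create the
same procedure in both the primary and replicate databases, then mark it
replicated on the primary so that the procedure call, rather than the per-row
results, is forwarded:

    -- in both the primary and the replicate database
    create procedure upd_increment_id
    as
        update mytable set id = id + 1
    go

    -- on the primary only: replicate calls to this procedure
    sp_setrepproc upd_increment_id, 'function'
    go

You still need a matching function replication definition and subscription on
the replication server; see the Replication Server documentation for the
details of your release.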

Back to top


2.5 What is the difference between an LTM and a RepAgent?


As described in Section 1.2, Log Transfer Managers (LTMs) and RepAgents are the
processes that transfer data between ASE and the Replication Server.

LTMs were delivered with the first releases of Replication Server. Each LTM is
a separate process at the operating system level that runs alongside ASE and
Replication Server. As with ASE and Replication Server, a RUN_<ltm_server> file
and a configuration file are required for each LTM. One LTM is required for
each database being replicated.

Along with ASE 11.5 a new concept was introduced, that of the RepAgent. I am
not sure if you needed to use RepServer 11.5 as well, or whether RepAgents
could talk to earlier versions of Replication Server. Each RepAgent is, in
effect, a slot-in replacement for an LTM. However, instead of running as a
separate operating system process, it runs as a thread within ASE. Pretty much
all of the requirements for replication using an LTM apply to the RepAgents
(one per database being replicated, etc.), but you no longer need separate
configuration files.

Back to top


2.6 Which should I use, RepAgent or LTM?


The differences between RepAgents and LTMs are discussed in Section 2.5.
Which, then, to choose? There are pros and cons to both; however, it should be
stated up front that RepAgents are the later offering and I believe that
Sybase would expect you to use them. Certainly the documentation for LTMs is a
little buried, implying that Sybase does not consider them to be as current as
RepAgents.

LTM Cons:

* Older technology; not sure if it is still being actively supported.
* Not integrated within ASE, so there is a (small) performance penalty.
* Separate processes, so they need additional monitoring in production.

LTM Pros:

* Possible to restart LTM without having to restart ASE.

RepAgent Cons

* If it crashes it is possible that you will have to restart ASE in order to
restart RepAgent.

RepAgent Pros

* Latest, and presumably greatest, offering.
* Tightly integrated with ASE so good performance.
* Less to manage, no extra entries in the interfaces file.

Back to top


next prev ASE FAQ

Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log on
OLTP is filling.

next prev ASE FAQ


3.1 Why am I running out of locks on the replicate side?


Sybase replication works by taking each transaction that occurs in the primary
dataserver and applying it to the replicate. Since replication works on the
transaction log, a single, atomic, update on the primary side that updates a
million rows will be translated into a million single-row updates. This may
seem very strange but is a simple consequence of how it works. On the primary,
this million row update will attempt to escalate the locks that it has taken
out to an exclusive table lock. However, on the replicate side each row is
updated individually, much as if they were being updated within a cursor loop.
Now, Sybase only tries to escalate locks from a single atomic statement (see
ASE Qx.y), so it will never try to escalate the lock. However, since the
updates are taking place within a single transaction, Sybase will need to take
out enough page locks to lock the million rows.

So, how much should you increase the locks parameter on the replicate side? A
good rule of thumb might be double it or add 40,000 whichever is the larger.
This has certainly worked for us.
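
For example, if the replicate is currently configured with 50,000 locks, the
rule of thumb above suggests doubling it. The figure below is purely
illustrative; the parameter is set with sp_configure:

    1> sp_configure "number of locks", 100000
    2> go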

Back to top


3.2 Someone was playing with replication and now the transaction log on OLTP
is filling.


Once replication has been configured, ASE adds another marker to the
transaction log. The first marker is the conventional one that marks which
transactions have had their data written to disk. The second is there to
ensure that the transactions have also been replicated. Clearly, if someone
installed replication and did not clean up properly after themselves, this
marker will still be there and consequently the transaction log will be filling
up. If you are certain that replication is not being used on your system, you
can disable the secondary truncation marker with the following commands:

1> use <database>
2> go
1> dbcc settrunc(ltm, ignore)
2> go

The above code is the normal mechanism for disabling the truncation point. I
have never had a problem with it. However, an alternative mechanism for
disabling the truncation point is given below. I do not know if it will work
in situations that the previous example won't, or if it works for databases
that are damaged. If someone knows when and why you would use it, please let
me know.

1> sp_role "grant", sybase_ts_role, sa
2> go
1> set role sybase_ts_role on
2> go
1> dbcc dbrepair(dbname, ltmignore)
2> go
1> sp_role "revoke", sybase_ts_role, sa
2> go

This scenario is also very common if you load a copy of your replicated
production database into development.

Back to top


next prev ASE FAQ

Additional Information/Links

4.1 Links
4.2 Newsgroups

next prev ASE FAQ


4.1 Links


Thierry Antinolfi has a replication FAQ at his site
dbadevil that covers a lot of good stuff.

Rob Verschoor has a 'Replication Server Tips & Tricks' section on his site, as
well as an indispensible quick reference guide!

Back to top


4.2 Newsgroups


There are a number of newsgroups that can deal with questions. Sybase have
several in their own forums area.

For Replication Server:


for SQL Remote and the issues of replicating with ASA:


and of course, there is always the ubiquitous


Back to top


next prev ASE FAQ

Archive-name: databases/sybase-faq/part7

General Troubleshooting

1. How do I turn off marked suspect on my database?
2. On startup, the transaction log of a database has filled and recovery has
suspended, what can I do?
3. Why do my page locks not get escalated to a table lock after 200 locks?

Performance and Tuning Advanced Administration ASE FAQ


1.4.1 How do I turn off marked suspect on my database?


Say one of your databases is marked suspect as the SQL Server is coming up.
Here are the steps to take to unset the flag.

Remember to fix the problem that caused the database to be marked suspect
after switching the flag.

System 11

1. sp_configure "allow updates", 1
2. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
3. begin transaction
4. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
5. commit transaction
6. shutdown
7. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure
14. shutdown
15. startserver -f RUN_*

System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
4. begin transaction
5. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
6. commit transaction
7. shutdown
8. startserver -f RUN_*
9. fix the problem that caused the database to be marked suspect
10. begin transaction
11. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
12. commit transaction
13. sp_configure "allow updates", 0
14. reconfigure
15. shutdown
16. startserver -f RUN_*

Pre System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
4. begin transaction
5. update sysdatabases set status = -32767 where dbid = db_id("my_hosed_db")
6. commit transaction
7. you should be able to access the database for it to be cleared out. If not:
1. shutdown
2. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure

Return to top


1.4.2 On startup, the transaction log of a database has filled and recovery
has suspended, what can I do?


You might find the following in the error log:

00:00000:00001:2000/01/04 07:43:42.68 server Can't allocate space for object
'syslogs' in database 'DBbad' because 'logsegment' segment is full/has no free
extents. If you ran out of space in syslogs, dump the transaction log.
Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the
segment.
00:00000:00001:2000/01/04 07:43:42.68 server Error: 3475, Severity: 21, State:
00:00000:00001:2000/01/04 07:43:42.68 server There is no space available in
SYSLOGS for process 1 to log a record for which space has been reserved. This
process will retry at intervals of one minute. The internal error number is -4.

which can prevent ASE from starting properly. Here is a neat solution from
Sean Kiely of Sybase Technical Support that works if the database has any
"data only" segments. Obviously this method does not apply to the master
database. The Sybase Troubleshooting Guide has very good coverage of
recovering the master database.

1. You will have to bring the server up with trace flag 3608 to prevent the
recovery of the user databases.
2. sp_configure "allow updates",1
3. Write down the segmap entries from the sysusages table for the toasted
database.
4. update sysusages
set segmap = 7
where dbid = db_id("my_toasted_db")
and segmap = 3
5. select status - 320
from sysdatabases
where dbid = db_id("my_toasted_db") -- save this value.
begin transaction
update sysdatabases set status = -32768 where dbid = db_id("my_toasted_db")
go -- if all is OK, then...
commit transaction
6. Restart the server without the trace flag. With luck it should now have
enough space to recover. If it doesn't, you are in deeper trouble than
before; you do have a good, recent backup, don't you?
7. dump database my_toasted_db with truncate_only
8. Reset the segmap entries in sysusages to be those as saved in 3. above.
9. Shutdown ASE and restart. (The traceflag should have gone at step 6., but
ensure that it is not there!)

Return to top


1.4.3: Why do my page locks not get escalated to a table lock after 200 locks?


Several reasons why this may be happening.

* Are you doing the updates from within a cursor?

The lock promotion only happens if you are attempting to take out 200 locks
in a single operation, i.e. a single insert, update or delete. If you
continually loop over a table using a cursor, locking one row at a time, the
lock promotion never fires. Either use an explicit mechanism to lock the
whole table, if that is required, or remove the cursor, replacing it with an
appropriate join.

* A single operation is failing to escalate?

Even if you are performing a single insert, update or delete, Sybase only
attempts to lock the whole table when the lock escalation point is
reached. If this attempt fails because there is another lock which
prevents the escalation, the attempt is aborted and individual page locking
continues.

Return to top


Performance and Tuning Advanced Administration ASE FAQ

Archive-name: databases/sybase-faq/part10

1.5.9: You and showplan output


As recently pointed out on the Sybase-L list, the showplan information that was
here is terribly out of date. It was written back when the output from ASE and
MS SQL Server was identical. (To see just how different they have become, have
a look at the O'Reilly book "Transact-SQL Programming", which does a
line-for-line comparison.) The write-up in the Performance and Tuning Guide is
excellent, and this section was doing nothing but causing problems.

If you do have a need for the original document, then it can be found here, but
it will no longer be considered part of the official FAQ.

Back to top


1.5.10: Poor man's sp_sysmon


This is needed for System 10 and Sybase 4.9.2 where there is no sp_sysmon
command available.

Fine-tune the waitfor delay for your application. You may need the TS Role --
see Q3.1.

use master
dbcc traceon(3604)
dbcc monitor ("clear", "all", "on")
waitfor delay "00:01:00"
dbcc monitor ("sample", "all", "on")
dbcc monitor ("select", "all", "on")
dbcc traceon(8399)
select field_name, group_name, value
from sysmonitors
dbcc traceoff(8399)
dbcc traceoff(3604)

Back to top


1.5.11: View MRU-LRU procedure cache chain


dbcc procbuf gives a listing of the current contents of the procedure cache. By
repeating the process at intervals it is possible to watch procedures moving
down the MRU-LRU chain, and so to see how long procedures remain in cache. The
neat thing about this approach is that you can size your cache according to
what is actually happening, rather than relying on estimates based on
assumptions that may not hold on your site.

To run it:

dbcc traceon(3604)
dbcc procbuf

If you use sqsh it's a bit easier to grok the output:

dbcc traceon(3604);
dbcc procbuf;|fgrep <pbname>

See Q1.5.7 regarding procedure cache sizing.

Back to top


1.5.12: Improving Text/Image Type Performance


If you know that you are going to be using a text/image column immediately,
insert the row setting the column to a non-null value.

There's a noticeable performance gain.

Unfortunately, text and image datatypes cannot be passed as parameters to
stored procedures. The address of the text or image location must be created
and returned where it is then manipulated by the calling code. This means that
transactions involving both text and image fields and stored procedures are not
atomic. However, the datatypes can still be declared as not null in the table

Given this example -

create table key_n_text
(
    key     int  not null,
    notes   text not null
)

This stored procedure can be used -

create procedure sp_insert_key_n_text
    @key     int,
    @textptr varbinary(16) output
as
    /*
    ** Generate a valid text pointer for WRITETEXT by inserting an
    ** empty string in the text field.
    */
    insert key_n_text (key, notes)
    values (@key, "")

    select @textptr = textptr(notes)
    from key_n_text
    where key = @key

    return 0

The return parameter is then used by the calling code to update the text field,
via the dbwritetext() function if using DB-Library for example.
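
From isql, the same procedure can be exercised with WRITETEXT directly. A
hedged sketch (the key value and note text are illustrative):

    declare @ptr varbinary(16)
    exec sp_insert_key_n_text 42, @ptr output
    writetext key_n_text.notes @ptr "Some note text"
    go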

Back to top


Server Monitoring General Troubleshooting ASE FAQ

Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

Platform Specific Issues - Solaris Performance and Tuning ASE FAQ


1.6.1: How do I configure Monitor Server?


Monitor Server is a separate server from the normal dataserver. Its purpose, as
the name suggests, is to monitor ASE. It uses internal counters to determine
what is happening. On its own, it does not actually do a lot. You need to hook
up a client of some sort in order to be able to view the results.

Configuration is easy. The Sybase documentation is very good on this one for
either Unix or NT. Rather than repeat myself, go to the Sybase web site and
check out the Monitor Server User's Guide. Obviously the link should take you
to the HTML edition of the book. There is also a PDF available. Look for
"monbook.pdf". If Sybase has skipped to ASE 99.9 and this link no longer works,
then you will have to go search the Sybase home pages.

Back to top


1.6.2: OK, that was easy, how do I configure a client?


I see that you like a challenge! Sybase offers a Java client to view the
output from Monitor Server. It is accessible either standalone or via the
Win32 edition of Sybase Central.

Standalone on NT/2000

I could not find anything about setting up the clients in the standard
documentation set. However, there is a small paper on it here (towards the
bottom). It misses out a couple of important details, but is helpful for all
that.

I did not try too hard to get the 11.9.2 version running, since the 12.5
version will monitor 11.9 servers.

I do not have a boxed release of ASE 12.5 for NT, just the developers release.
This does not come with all of the necessary files. In order to run the Monitor
Client, you will need the PC Client CD that came with the boxed release. If all
you have is the developer's edition, you might be stuck. It would be worth
getting in touch with Sybase to see if they could ship you one. There is
probably a charge!

You will need to install the client software. If you have a release of ASE
already installed and running you might want to install this into a separate
area. I am not sure what files it includes and versions etc, but if you have
the space I recommend saving yourself some hassle. If you have an older edition
of ASE installed, the installation will ask if you want to overwrite two files,
mclib.dll and mchelp.dll, both of which should reside in your winnt/system32
directory. It is important that you accept both of the overwrites. The older
versions of these files do not seem to work.

Once installed, you will also need to spend some time playing with environment
variables. I have got 3 editions of ASE all running successfully on the one
machine (see Q1.3.9). I chose to have one user for each ASE instance, each with
their own local environment variables pointing to the relevant installation for
them, plus a generic account for my main user that I configured to use the
software installed from the client CD. I adjusted the variables so that each
user had their own set of variables and all of the installations worked OK.

Next, you need a copy of Java 1.1.8 installed. The client CD has a copy of JDK
1.1.8 in the "ASEP_Win32" directory. This is the one to go for, as I am sure
that it was the one that the Monitor Client was built with. I did try a version
from Sun's Java archive, but it failed.

Next, set up the JAVA_HOME environment variable. If you installed the JDK into
its default location, that will be C:\jdk1.1.8.

Check to ensure that your CLASSPATH is defined as (assuming that you installed
the client into C:\Sybase_Client):


You may want to check that the files mclib.dll and mchelp.dll exist in your
winnt/system32 directory if you were not asked to replace them earlier. You may
also want to check that the default Java command is correct with java -version.
It should return

java version "1.1.8"

You should now be able to fire up the main window with:

java sybase.monclt.mcgui.procact.ProcActApp 12.5 sa "sa_password" en 0 sccsen.hlp

(The paper says that you should use "jre" and not "java". That gives me a
consistent "Class not found...". I do not know why.)

You should be presented with a screen like this, which will fill with process
information after 10 seconds. Choose "File->Monitors >" to choose a monitoring
graph. Here are a couple of screenshots from various monitors:

* Performance Summary
* Performance Trends...
* Process Current SQL Statement
* Network Activity

Obviously, all of this can be set from the command line or via a batch script.
Shove the following into a file called mon.bat and invoke using mon ASE_SERVER

java sybase.monclt.mcgui.procact.ProcActApp %1 12.5 %2 sa "%3" en 0 scssen.hlp

Obviously, you will need to replace "C:\SYBASE_CLIENT" with the correct string
pointing to your Sybase ASE installation.

Via Sybase Central on NT/2000

You will need to have installed the version of the Java Development Kit that
comes with your CD, as per standalone installation. Next, create a shortcut to
the file %SYBASE%\Sybase Central 3.2\win32\scview.exe. This is the Win 32
version of Sybase Central. Next, edit the shortcut's properties (right click on
the shortcut and select "Properties"). Now, edit the "Start In" field to be
"C:\jdk1.1.8\bin", assuming that you installed the JDK into its default
location.

Now, assuming that both the ASE and Monitor servers are running, start up this
version of Sybase Central. Unlike the Java edition, all of the Servers from the
SQL.INI file are displayed at startup. Right click on the ASE server you wish
to monitor and select "Properties". This brings up a triple tabbed screen.
Select the "Monitor Server" tab and use the drop down to select the appropriate
monitor server. Now, connect to the ASE server and you will see another level
in the options tree called "Monitors". Click on it and you should see a
complete list of the monitors you can choose from. Double clicking on one
should display it. The output is exactly the same as for standalone operation.

Back to top


Platform Specific Issues - Solaris Performance and Tuning ASE FAQ

Archive-name: databases/sybase-faq/part13

SQL Advanced

6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE
6.2.11 Is it possible to concatenate all the values from a column and
return a single row?
6.2.12 Selecting rows N to M without Oracle's rownum?
6.2.13 How can I return number of rows that are returned from a grouped
query without using a temporary table?

Useful SQL Tricks SQL Fundamentals ASE FAQ


6.2.1: How to emulate the Oracle decode function/crosstab


If you are using ASE version 11.5 or later, the simplest way to implement the
Oracle decode is with a CASE statement; compare that approach with the example
using a characteristic function given below.

However, if you have a version of ASE that does not support the case statement,
then you will have to try the following. There may be other reasons to try
characteristic functions. If you go to the Amazon web site and look at the
reviews of Rozenshtein's book, Advanced SQL, you will see that one reviewer
believes that a true crosstab is not possible with the case statement. I am not
sure. I have also not done any performance tests to see which is quicker.
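
The CASE version mentioned above can be sketched as follows, using the
STUDENT_ID/COURSE_ID registration table from the characteristic-function
example further down (the table name is a placeholder):

    select STUDENT_ID,
        sum(case when COURSE_ID = 101 then 1 else 0 end) COURSE_101,
        sum(case when COURSE_ID = 105 then 1 else 0 end) COURSE_105,
        sum(case when COURSE_ID = 201 then 1 else 0 end) COURSE_201,
        sum(case when COURSE_ID = 210 then 1 else 0 end) COURSE_210,
        sum(case when COURSE_ID = 300 then 1 else 0 end) COURSE_300
    from <registration_table>
    group by STUDENT_ID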

There is a neat way to use boolean logic to perform cross-tab or rotation
queries easily, and very efficiently. Using the aggregate 'Group By' clause in
a query and the ISNULL(), SIGN(), ABS(), SUBSTRING() and CHARINDEX() functions,
you can create queries and views to perform all kinds of summarizations.

This technique does not produce easily understood SQL statements.

If you want to test a field to see if it is equal to a value, say 100, use the
following code:

SELECT (1- ABS( SIGN( ISNULL( 100 - <field>, 1))))

The innermost function will return 1 when the field is null, a positive value
if the field < 100, a negative value if the field is > 100 and will return 0 if
the field = 100. This example is for Sybase or Microsoft SQL server, but other
servers should support most of these functions or the COALESCE() function,
which is the ANSI equivalent to ISNULL.

The SIGN() function returns zero for a zero value, -1 for a negative value and
1 for a positive value. The ABS() function returns zero for a zero value and a
positive value for any non-zero value. In this case it will return 0 or 1,
since the argument is the function SIGN(), thus acting as a binary switch.

Put it all together and you get '0' if the values match and '1' if they don't.
This is not that useful, so we subtract this return value from '1' to invert
it, giving us a TRUE value of '1' and a FALSE value of '0'. These return values
can then be multiplied by the value of another column, or used within the
parameters of another function like SUBSTRING() to return a conditional text
value.

For example, to create a grid from a student registration table containing
STUDENT_ID and COURSE_ID columns, where there are 5 courses (101, 105, 201,
210, 300) use the following query:

Compare this version with the case statement above.

select STUDENT_ID,
    sum(1- ABS( SIGN( ISNULL( 101 - COURSE_ID, 1)))) COURSE_101,
    sum(1- ABS( SIGN( ISNULL( 105 - COURSE_ID, 1)))) COURSE_105,
    sum(1- ABS( SIGN( ISNULL( 201 - COURSE_ID, 1)))) COURSE_201,
    sum(1- ABS( SIGN( ISNULL( 210 - COURSE_ID, 1)))) COURSE_210,
    sum(1- ABS( SIGN( ISNULL( 300 - COURSE_ID, 1)))) COURSE_300
from <registration_table>
group by STUDENT_ID

Back to top


6.2.2: How to implement if-then-else in a select clause


ASE 11.5 introduced the case statement, which can be used to replace a lot of
this 'trick' SQL with more readable (and standard) code. With a case statement,
an if then else is as easy as:

declare @val char(20)
select @val = 'grand'

select case when @val = 'small'
            then 'petit'
            else 'grand'
       end

However, quite a number of people are still using pre-11.5 implementations,
including those people using the free Linux release. In that case you
can use the following recipe.

To implement the following condition in a select clause:

if @val = 'small' then
    print 'petit'
else
    print 'grand'

in versions of ASE prior to 11.5 do the following:

select isnull(substring('petit', charindex('small', @val), 255), 'grand')

To test it out, try this:

declare @val char(20)
select @val = 'grand'
select isnull(substring('petit', charindex('small', @val), 255), 'grand')

This code is not readily understandable by most programmers, so remember to
comment it well.

Back to top


6.2.3: Removed


6.2.4: How to pad with leading zeros an int or smallint.


By example:

declare @Integer int

/* Good for positive numbers only. */
select @Integer = 1000

select "Positives Only" =
right( replicate("0", 12) + convert(varchar, @Integer), 12)

/* Good for positive and negative numbers. */
select @Integer = -1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

select @Integer = 1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)


Produces the following results:

Positives Only
--------------
000000001000

Both Signs
--------------
-000000001000

Both Signs
--------------
+000000001000

Back to top


6.2.5: Divide by zero and nulls


During processing, if a divide by zero error occurs you will not get the answer
you want. If you want the result set to come back and null to be displayed
where divide by zero occurs do the following:

1> select * from total_temp
2> go
field1 field2
----------- -----------
10 10
10 0

(3 rows affected)
1> select field1, field1/(field2*convert(int,
substring('1',1,abs(sign(field2))))) from total_temp
2> go
----------- -----------
         10           1
         10        NULL

Back to top


6.2.6: Convert months to financial months


To convert months to financial year months (i.e. July = 1, Dec = 6, Jan = 7,
June = 12 )

Method #1

select ... ((sign(sign((datepart(month,GetDate())-6) * -1)+1) *
(datepart(month, GetDate())+6))
+ (sign(sign(datepart(month, GetDate())-7)+1) *
(datepart(month, GetDate())-6)))
from ...

Method #2

select charindex(datename(month,getdate()),
"          July      August    September October   November  December  January   February  March     April     May       June"
) / 10

In the above example the embedded blanks are significant: each month name sits
in a ten-character slot, preceded by ten leading blanks, so that dividing the
position by 10 yields the financial month.

Back to top


Archive-name: databases/sybase-faq/part5

User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

Advanced Administration Basic Administration ASE FAQ


1.2.1: Changing varchar(m) to varchar(n)


Before you start:

select max(datalength(column_name))
from affected_table

In other words, please be sure you're going into this with your head on
straight.

How To Change System Catalogs

This information is Critical To The Defense Of The Free World, and you would be
Well Advised To Do It Exactly As Specified:

use master

sp_configure "allow updates", 1

reconfigure with override /* System 10 and below */
use victim_database
select name, colid
from syscolumns
where id = object_id("affected_table")
begin tran
update syscolumns
set length = new_value
where id = object_id("affected_table")
and colid = value_from_above
update sysindexes
set maxlen = maxlen + increase/decrease?
where id=object_id("affected_table")
and indid = 0
/* check results... cool? Continue... else rollback tran */
commit tran
use master

sp_configure "allow updates", 0

reconfigure /* System 10 and below */

Return to top


1.2.2: FAQ on partitioning


Index of Sections

* What Is Table Partitioning?
+ Page Contention for Inserts
+ I/O Contention
+ Caveats Regarding I/O Contention
* Can I Partition Any Table?
+ How Do I Choose Which Tables To Partition?
* Does Table Partitioning Require User-Defined Segments?
* Can I Run Any Transact-SQL Command on a Partitioned Table?
* How Does Partition Assignment Relate to Transactions?
* Can Two Tasks Be Assigned to the Same Partition?
* Must I Use Multiple Devices to Take Advantage of Partitions?
* How Do I Create A Partitioned Table That Spans Multiple Devices?
* How Do I Take Advantage of Table Partitioning with bcp in?
* Getting More Information on Table Partitioning

What Is Table Partitioning?

Table partitioning is a procedure that creates multiple page chains for a
single table.

The primary purpose of table partitioning is to improve the performance of
concurrent inserts to a table by reducing contention for the last page of a
page chain.

Partitioning can also potentially improve performance by making it possible to
distribute a table's I/O over multiple database devices.

Page Contention for Inserts

By default, ASE stores a table's data in one double-linked set of pages called
a page chain. If the table does not have a clustered index, ASE makes all
inserts to the table in the last page of the page chain.

When a transaction inserts a row into a table, ASE holds an exclusive page lock
on the last page while it inserts the row. If the current last page becomes
full, ASE allocates and links a new last page.

As multiple transactions attempt to insert data into the table at the same
time, performance problems can occur. Only one transaction at a time can obtain
an exclusive lock on the last page, so other concurrent insert transactions
block each other.

Partitioning a table creates multiple page chains (partitions) for the table
and, therefore, multiple last pages for insert operations. A partitioned table
has as many page chains and last pages as it has partitions.

I/O Contention

Partitioning a table can improve I/O contention when ASE writes information in
the cache to disk. If a table's segment spans several physical disks, ASE
distributes the table's partitions across fragments on those disks when you
create the partitions.

A fragment is a piece of disk on which a particular database is assigned space.
Multiple fragments can sit on one disk or be spread across multiple disks.

When ASE flushes pages to disk and your fragments are spread across different
disks, I/Os assigned to different physical disks can occur in parallel.

To improve I/O performance for partitioned tables, you must ensure that the
segment containing the partitioned table is composed of fragments spread across
multiple physical devices.

Caveats Regarding I/O Contention

Be aware that when you use partitioning to balance I/O you run the risk of
disrupting load balancing even as you are trying to achieve it. The following
scenarios can keep you from gaining the load balancing benefits you want:

* You are partitioning an existing table. The existing data could be sitting
on any fragment. Because partitions are randomly assigned, you run the risk
of filling up a fragment. The partition will then steal space from other
fragments, thereby disrupting load balancing.
* Your fragments differ in size.
* The segment maps are configured such that other objects are using the
fragments to which the partitions are assigned.
* A very large bcp job inserts many rows within a single transaction. Because
a partition is assigned for the lifetime of a transaction, a huge amount of
data could go to one particular partition, thus filling up the fragment to
which that partition is assigned.

Can I Partition Any Table?

No. You cannot partition the following kinds of tables:

1. Tables with clustered indexes (as of release 11.5 it is possible to have a
clustered index on a partitioned table)
2. ASE system tables
3. Work tables
4. Temporary tables
5. Tables that are already partitioned. However, you can unpartition and then
re-partition tables to change the number of partitions.
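
The re-partitioning route in item 5 can be sketched as follows (the table name
and partition count are illustrative):

```sql
/* unpartition the table, then re-partition with a new partition count */
alter table my_table unpartition
alter table my_table partition 20
```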

How Do I Choose Which Tables To Partition?

You should partition heap tables that have large amounts of concurrent insert
activity. (A heap table is a table with no clustered index.) Here are some
examples:

1. An "append-only" table to which every transaction must write
2. Tables that provide a history or audit list of activities
3. A new table into which you load data with bcp in. Once the data is loaded
in, you can unpartition the table. This enables you to create a clustered
index on the table, or issue other commands not permitted on a partitioned
table.

Does Table Partitioning Require User-Defined Segments?

No. By design, each table is intrinsically assigned to one segment, called the
default segment. When a table is partitioned, any partitions on that table are
distributed among the devices assigned to the default segment.

In the example under "How Do I Create A Partitioned Table That Spans Multiple
Devices?", the table sits on a user-defined segment that spans three devices.

Can I Run Any Transact-SQL Command on a Partitioned Table?

No. Once you have partitioned a table, you cannot use any of the following
Transact-SQL commands on the table until you unpartition it:

1. drop table
2. sp_placeobject
3. truncate table
4. alter table table_name partition n

On releases of ASE prior to 11.5 it was not possible to create a clustered
index on a partitioned table either.

How Does Partition Assignment Relate to Transactions?

A user is assigned to a partition for the duration of a transaction. Assignment
of partitions resumes with the first insert in a new transaction. The user
holds the lock, and therefore partition, until the transaction ends.

For this reason, if you are inserting a great deal of data, you should batch it
into separate jobs, each within its own transaction. See "How Do I Take
Advantage of Table Partitioning with bcp in?", for details.

Can Two Tasks Be Assigned to the Same Partition?

Yes. ASE randomly assigns partitions. This means there is always a chance that
two users will vie for the same partition when attempting to insert and one
would lock the other out.

The more partitions a table has, the lower the probability of users trying to
write to the same partition at the same time.

Must I Use Multiple Devices to Take Advantage of Partitions?

It depends on which type of performance improvement you want.

Table partitioning improves performance in two ways: primarily, by decreasing
page contention for inserts and, secondarily, by decreasing I/O contention.
"What Is Table Partitioning?" explains each in detail.

If you want to decrease page contention you do not need multiple devices. If
you want to decrease I/O contention, you must use multiple devices.

How Do I Create A Partitioned Table That Spans Multiple Devices?

Creating a partitioned table that spans multiple devices is a multi-step
procedure. In this example, we assume the following:

* We want to create a new segment rather than using the default segment.
* We want to spread the partitioned table across three devices, data_dev1,
data_dev2, and data_dev3.

Here are the steps:

1. Define a segment:

sp_addsegment newsegment, my_database, data_dev1

2. Extend the segment across all three devices:

sp_extendsegment newsegment, my_database, data_dev2
sp_extendsegment newsegment, my_database, data_dev3

3. Create the table on the segment:

create table my_table
(names varchar(80) not null)
on newsegment

4. Partition the table:

alter table my_table partition 30

How Do I Take Advantage of Table Partitioning with bcp in?

You can take advantage of table partitioning with bcp in by following these
steps:
1. Break up the data file into multiple files and simultaneously run each of
these files as a separate bcp job against one table.

Running simultaneous jobs increases throughput.

2. Choose a number of partitions greater than the number of bcp jobs.

Having more partitions than processes (jobs) decreases the probability of
page lock contention.

3. Use the batch option of bcp in. For example, after every 100 rows, force a
commit. Here is the syntax of this command:

bcp table_name in filename -b100

Each time a transaction commits, ASE randomly assigns a new partition for
the next insert. This, in turn, reduces the probability of page lock
contention.
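Putting the three steps together, here is a sketch of the whole job under a
Unix shell (the server name, login, file names and row counts are all
illustrative, and assume the table has more partitions than jobs):

```shell
# split the data file into roughly equal pieces: part_aa, part_ab, ...
split -l 250000 big_datafile part_

# run one bcp job per piece, in parallel, committing every 100 rows
for f in part_a?; do
    bcp my_database..my_table in $f -b100 -c -Usa -Ppassword -SMYSERVER &
done
wait
```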

Getting More Information on Table Partitioning

For more information on table partitioning, see the chapter on controlling
physical data placement in the ASE Performance and Tuning Guide.

Return to top


1.2.3: How to manually drop a table


Occasionally you may find that, after you issued a drop table command, ASE
crashed and consequently the table didn't drop entirely. Sure, you can't see
it, but that sucker is still floating around somewhere.

Here's a list of instructions to follow when trying to drop a corrupt table:

1. sp_configure "allow updates", 1
reconfigure with override

2. Write the db_id down:
use db_name
select db_id()
3. Write down the id of the bad_table:
select id
from sysobjects
where name = bad_table_name
4. You will need these index IDs to run dbcc extentzap. Also, remember that if
the table has a clustered index you will need to run extentzap on index
"0", even though there is no sysindexes entry for that indid.
select indid
from sysindexes
where id = table_id
5. This is not required but a good idea:
begin transaction
6. Type in this short script; it gets rid of all system catalog information
for the object, including any object and procedure dependencies that may be
present.

Some of the deletes are unnecessary but better safe than sorry.

declare @obj int
select @obj = id from sysobjects where name = "bad_table_name"
delete syscolumns where id = @obj
delete sysindexes where id = @obj
delete sysobjects where id = @obj
delete sysprocedures where id in
(select id from sysdepends where depid = @obj)
delete sysdepends where depid = @obj
delete syskeys where id = @obj
delete syskeys where depid = @obj
delete sysprotects where id = @obj
delete sysconstraints where tableid = @obj
delete sysreferences where tableid = @obj
delete sysdepends where id = @obj
7. Just do it!
commit transaction
8. Gather information to run dbcc extentzap:
use master
sp_dboption db_name, read, true
use db_name
9. Run dbcc extentzap once for each index (including index 0, the data level)
that you got from above:
use master
dbcc traceon (3604)
dbcc extentzap (db_id, obj_id, indx_id, 0)
dbcc extentzap (db_id, obj_id, indx_id, 1)

Notice that extentzap runs twice for each index. This is because the
last parameter (the sort bit) might be 0 or 1 for each index, and you
want to be absolutely sure you clean them all out.

10. Clean up after yourself.
sp_dboption db_name, read, false
use db_name
sp_configure "allow updates", 0
reconfigure with override

Return to top


1.2.4: Why not max out all my columns?


People occasionally ask the following valid question:

Suppose I have varying lengths of character strings none of which should
exceed 50 characters.

Is there any advantage of last_name varchar(50) over last_name varchar(255)?

That is, for simplicity, can I just define all my varying strings to be
varchar(255) without even thinking about how long they may actually be? Is
there any storage or performance penalty for this?

There is no performance penalty in doing this, but as another netter pointed
out:
If you want to define indexes on these fields, then you should specify the
smallest size because the sum of the maximal lengths of the fields in the
index can't be greater than 256 bytes.

and someone else wrote in saying:

Your data structures should match the business requirements. This way the
data structures themselves become a data dictionary for others to model
their applications against (report generation and the like).
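To illustrate the index-size point above, consider a hypothetical table with
two maxed-out columns (all names here are made up):

```sql
create table customer
(f_name varchar(255),
 l_name varchar(255))

/* this fails: the maximum key length (255 + 255 bytes)
** exceeds the 256-byte index key limit */
create index name_ix on customer (f_name, l_name)

/* with sizes matched to the business need, the same index is fine */
create table customer2
(f_name varchar(20),
 l_name varchar(30))

create index name_ix2 on customer2 (f_name, l_name)
```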

Return to top


1.2.5: What's a good example of a transaction?


This answer is geared towards Online Transaction Processing (OLTP).

To gain maximum throughput all your transactions should be in stored procedures
- see Q1.5.8. The transactions within each stored procedure should be short and
simple. All validation should be done outside of the transaction and only the
modification to the database should be done within the transaction. Also, don't
forget to name the transaction for sp_whodo - see Q9.2.

The following is an example of a good transaction:

/* perform validation */
select ...
if ... /* error */
/* give error message */
else /* proceed */
begin transaction acct_addition
update ...
insert ...
commit transaction acct_addition

The following is an example of a bad transaction:

begin transaction poor_us
update X ...
select ...
if ... /* error */
/* give error message */
else /* proceed */
update ...
insert ...
commit transaction poor_us

This is bad because:

* the first update on table X is held throughout the transaction. The idea
with OLTP is to get in and out fast.
* If an error message is presented to the end user and we await their
response, we'll maintain the lock on table X until the user presses return.
If the user is out in the can we can wait for hours.
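Combining this with the stored procedure advice above, a minimal sketch of the
good transaction wrapped in a procedure (the table, column and parameter names
are invented for illustration):

```sql
create procedure add_acct
    @acct_id int,
    @amount  money
as
    /* validation happens outside the transaction */
    if not exists (select 1 from accounts where acct_id = @acct_id)
    begin
        print "No such account"
        return 1
    end

    /* only the database modifications sit inside the named transaction */
    begin transaction acct_addition
        update accounts
        set balance = balance + @amount
        where acct_id = @acct_id

        insert acct_history (acct_id, amount)
        values (@acct_id, @amount)
    commit transaction acct_addition

    return 0
```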

Return to top


1.2.6: What's a natural key?


Let me think back to my database class... okay, I can't think that far so I'll
paraphrase... essentially, a natural key is a key for a given table that
uniquely identifies the row. It's natural in the sense that it follows the
business or real world need.

For example, assume that social security numbers are unique (I believe it is
strived to be unique but it's not always the case), then if you had the
following employee table:


ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then a natural key would be ssn. If the combination of f_name and l_name were
unique at this company, then another natural key would be f_name, l_name. As a
matter of fact, you can have many natural keys in a given table but in practice
what one does is build a surrogate (or artificial) key.

The surrogate key is guaranteed to be unique because (wait, get back, here it
goes again) it's typically a monotonically increasing value. Okay, my
mathematician wife would be proud of me... really all it means is that each
new key is simply the previous key plus one: i, i+1, i+2 and so on.

The reason one uses a surrogate key is because your joins will be faster.

If we extended our employee table to have a surrogate key:


id identity
ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then instead of doing the following:

where a.f_name = b.f_name
and a.l_name = b.l_name

we'd do this:

where a.id = b.id

We can build indexes on these keys and since Sybase's atomic storage unit is
2K, we can stash more values per 2K page with smaller indexes thus giving us
better performance (imagine the key being 40 bytes versus being say 4 bytes...
how many 40 byte values can you stash in a 2K page versus a 4 byte value? --
and how much wood could a wood chuck chuck, if a wood chuck could chuck wood?)
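For example, joining on the surrogate key rather than on the name pair (the
payroll table here is hypothetical):

```sql
select e.f_name, e.l_name, p.salary
from employee e, payroll p
where e.id = p.id   /* one 4-byte compare instead of two 20-byte ones */
```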

Does it have anything to do with natural joins?

Um, not really... from "A Guide to Sybase and SQL Server", McGovern and Date, p. 112:

The equi-join by definition must produce a result containing two identical
columns. If one of those two columns is eliminated, what is left is called
the natural join.

Return to top


1.2.7: Making a Stored Procedure invisible


System 11.5 and above

It is now possible to encrypt the stored procedure code that is stored in the
syscomments table. This is preferable to the old method of deleting the data,
as deleting will impact future upgrades. You can encrypt the text with the
sp_hidetext system procedure.

Pre-System 11.5

Perhaps you are trying to prevent the buyer of your software from defncopy'ing
all your stored procedures. It is perfectly safe to delete the syscomments
entries of any stored procedures you'd like to protect:

sp_configure "allow updates", 1

reconfigure with override /* System 10 and below */
use affected_database
delete syscomments where id = object_id("procedure_name")
use master

sp_configure "allow updates", 0

reconfigure /* System 10 and below */

I believe in future releases of Sybase we'll be able to see the SQL that is
being executed. I don't know if that would be simply the stored procedure name
or the SQL itself.

Return to top


1.2.8: Saving space when inserting rows monotonically


If the columns that comprise the clustered index are monotonically increasing
(that is, new row key values are greater than those previously inserted) the
following System 11 dbcc tune will not split the page when it's half way full.
Rather it'll let the page fill and then allocate another page:

dbcc tune(ascinserts, 1, "my_table")

By the way, SyBooks is wrong when it states that the above needs to be reset
when ASE is rebooted. This is a permanent setting.

To undo it:

dbcc tune(ascinserts, 0, "my_table")

Return to top


1.2.9: How to compute database fragmentation



dbcc traceon(3604)
dbcc tab(production, my_table, 0)


A delta of one means the next page is on the same track, two is a short seek,
three is a long seek. You can play with these constants but they aren't that
critical.

A table I thought was unfragmented had L1 = 1.2 L2 = 1.8

A table I thought was fragmented had L1 = 2.4 L2 = 6.6

How to Fix

You fix a fragmented table with clustered index by dropping and creating the
index. This measurement isn't the correct one for tables without clustered
indexes. If your table doesn't have a clustered index, create a dummy one and
drop it.
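A sketch of both fixes (the index, table and column names are illustrative):

```sql
/* table with a clustered index: drop it and recreate it */
drop index my_table.my_clust_ix
create clustered index my_clust_ix on my_table (key_col)

/* heap table: create a dummy clustered index, then drop it */
create clustered index dummy_ix on my_table (key_col)
drop index my_table.dummy_ix
```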

Return to top


1.2.10: Tasks a DBA should do...


A good presentation of a DBA's duties has been made available by Jeff Garbus
of Soaring Eagle Consulting Ltd, author of numerous books. These are
Powerpoint slides converted to web pages and so may be difficult to view with a
text browser!

An alternative view is catalogued below. (OK, so this list is crying out for a
bit of a revamp since checkstorage came along Ed!)

DBA Tasks
| Task | Reason | Period |
| | I consider | If your ASE permits, daily |
| | these the | before your database dumps. If |
| dbcc checkdb, | minimal | this is not possible due to |
| checkcatalog, | dbcc's to | the size of your databases, |
| checkalloc | ensure the | then try the different options |
| | integrity of | so that by the end of, say, a |
| | your database | week, you've run them all. |
| Disaster recovery | Always be | |
| scripts - scripts to | prepared for | |
| rebuild your ASE in | the worst. | |
| case of hardware | Make sure to | |
| failure | test them. | |
| scripts to logically | | |
| dump your master | You can | |
| database, that is bcp | selectively | |
| the critical system | rebuild your | |
| tables: sysdatabases, | database in | Daily |
| sysdevices, syslogins, | case of | |
| sysservers, sysusers, | hardware | |
| syssegments, | failure | |
| sysremotelogins | | |
| | A system | |
| | upgrade is | After any change as well as |
| %ls -la <disk_devices> | known to | daily |
| | change the | |
| | permissions. | |
| dump the user | CYA* | Daily |
| databases | | |
| dump the transaction | CYA | Daily |
| logs | | |
| dump the master | CYA | After any change as well as |
| database | | daily |
| | This is the | |
| System 11 and beyond - | configuration | |
| save the $DSQUERY.cfg | that you've | After any change as well as |
| to tape | dialed in, | daily |
| | why redo the | |
| | work? | |
| | | Depending on how often your |
| | | major tables change. Some |
| | | tables are pretty much static |
| | | (e.g. lookup tables) so they |
| update statistics on | To ensure the | don't need an update |
| frequently changed | performance | statistics, other tables |
| tables and | of your ASE | suffer severe trauma (e.g. |
| sp_recompile | | massive updates/deletes/ |
| | | inserts) so an update stats |
| | | needs to be run either nightly |
| | | /weekly/monthly. This should |
| | | be done using cronjobs. |
| create a dummy ASE and | | |
| do bad things to it: | See disaster | When time permits |
| delete devices, | recovery! | |
| destroy permissions... | | |
| Talk to the | It's better | |
| application | to work with | As time permits. |
| developers. | them than | |
| | against them. | |
| Learn new tools | So you can | As time permits. |
| | sleep! | |
| Read | Passes the | Priority One! |
| comp.databases.sybase | time. | |

* Cover Your Ass

Return to top


1.2.11: How to implement database security


This is a brief run-down of the features and ideas you can use to implement
database security:

Logins, Roles, Users, Aliases and Groups

* sp_addlogin - Creating a login adds a basic authorisation for an account -
a username and password - to connect to the server. By default, no access
is granted to any individual databases.
* sp_adduser - A user is the addition of an account to a specific database.
* sp_addalias - An alias is a method of allowing an account to use a specific
database by impersonating an existing database user or owner.
* sp_addgroup - Groups are collections of users at the database level. Users
can be added to groups via the sp_adduser command.

A user can belong to only one group - a serious limitation that Sybase
might be addressing soon according to the ISUG enhancements requests.
Permissions on objects can be granted or revoked to or from users or
groups.
* sp_role - A role is a high-level Sybase authorisation to act in a specific
capacity for administration purposes. Refer to the Sybase documentation for
details.

Make sure there is a unique login account for each physical person and/or
process that uses the server. Creating generic logins used by many people or
processes is a bad idea - there is a loss of accountability and it makes it
difficult to track which particular person is causing server problems when
looking at the output of sp_who. Note that the output of sp_who gives a
hostname - properly coded applications will set this value to something
meaningful (ie. the machine name the client application is running from) so you
can see where users are running their programs. Note also that if you look at
master..sysprocesses rather than just sp_who, there is also a program_name.
Again, properly coded applications will set this (eg. to 'isql') so you can see
which application is running. If you're coding your own client applications,
make sure you set hostname and program_name via the appropriate Open Client
calls. One imaginative use I've seen of the program_name setting is to
incorporate the connection time into the name, eg APPNAME-DDHHMM (you have 16
characters to play with), as there's no method of determining this otherwise.

Set up groups, and add your users to them. It is much easier to manage an
object permissions system in this way. If all your permissions are set to
groups, then adding a user to the group ensures that users automatically
inherit the correct permissions - administration is *much* simpler.

Objects and Permissions

Access to database objects is defined by granting and/or revoking various
access rights to and from users or groups. Refer to the Sybase documentation
for details.


The ideal setup has all database objects being owned by the dbo, meaning no
ordinary users have any default access at all. Specific permissions users
require to access the database are granted explicitly. As mentioned above - set
permissions for objects to a group and add users to that group. Any new user
added to the database via the group then automatically obtains the correct set
of permissions.
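A minimal sketch of this approach (the login, group and procedure names are
invented):

```sql
sp_addlogin fred, "initial_password"
sp_addgroup accounts_grp
sp_adduser fred, fred, accounts_grp

/* grant to the group, not the individual user */
grant execute on upd_balance_proc to accounts_grp
```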

Preferably, no access is granted at all to data tables, and all read and write
activity is accomplished through stored procedures that users have execute
permission on. The benefit of this from a security point of view is that access
can be rigidly controlled with reference to the data being manipulated, user
clearance levels, time of day, and anything else that can be programmed via
T-SQL. The other benefits of using stored procedures are well known (see Q1.5.8
). Obviously whether you can implement this depends on the nature of your
application, but the vast majority of in-house-developed applications can rely
solely on stored procedures to carry out all the work necessary. The only
server-side restriction on this method is the current inability of stored
procedures to adequately handle text and image datatypes (see Q1.5.12). To get
around this views can be created that expose only the necessary columns to
direct read or write access.


Views can be a useful general security feature. Where stored procedures are
inappropriate views can be used to control access to tables to a lesser extent.
They also have a role in defining row-level security - eg. the underlying table
can have a security status column joined to a user authorisation level table in
the view so that users can only see data they are cleared for. Obviously they
can also be used to implement column-level security by screening out sensitive
columns from a table.
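A sketch of such a row-level security view, assuming the underlying table
carries a security level column and a separate (hypothetical) clearance table
maps database users to their clearance:

```sql
create view cleared_orders
as
select o.order_no, o.amount
from orders o, clearance c
where c.uname = user_name()
  and o.security_level <= c.level
```

Users select from cleared_orders rather than orders, and only see rows at or
below their own clearance.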


Triggers can be used to implement further levels of security - they could be
viewed as a last line of defence in being able to rollback unauthorised write
activity (they cannot be used to implement any read security). However, there
is a strong argument that triggers should be restricted to doing what they were
designed for - implementing referential integrity - rather than being loaded up
with application logic.

Administrative Roles

With Sybase version 10 came the ability to grant certain administrative roles
to user accounts. Accounts can have sa-level privilege, or be restricted to
security or operator roles - see sp_role.


The use of any generic account is not a good idea. If more than one person
requires access as sa to a server, then it is more accountable and traceable if
they each have an individual account with sa_role granted.

Return to top


1.2.12: How to Shrink a Database


Warning: This document has not been reviewed. Treat it as alpha-test
quality information and report any problems and suggestions to the FAQ
maintainer.
It has historically been difficult to shrink any database except tempdb
(because it is created fresh every boot time). The two methods commonly used
have been:

1. Ensure that you have scripts for all your objects (some tools, like SA
Companion, DB Artisan or scripts from Sybperl, can create scripts from
an existing database), then bcp out your data, drop the database, recreate
it smaller, run your scripts, and bcp in your data.
2. Use a third-party tool such as DataTool's SQL Backtrack, which in essence
automates the first process.

This technote outlines a third possibility that can work in most cases.

An Unsupported Method to Shrink a Database

This process is fairly trivial in some cases, such as removing a recently added
fragment or trimming a database that has a log fragment as its final
allocation, but can also be much more complicated or time consuming than the
script and bcp method.

General Outline

The general outline of how to do it is:

1. Make a backup of the current database
2. Migrate data from sysusages fragments with high lstart values to fragments
with low lstart values.
3. Edit sysusages to remove high lstart fragments that no longer have data
4. Reboot ASE.


1. Dump your database. If anything goes wrong, you will need to recover from
this backup!
2. Decide how many megabytes of space you wish to remove from your database.
3. Examine sysusages for the database. You will be shrinking the database by
removing the fragments with the highest lstart values. If the current
fragments are not of appropriate sizes, you may need to drop the database,
recreate it so there are more fragments, and reload the dump.

A trivial case: An example of a time when you can easily shrink a
database is if you have just altered it and are sure there has been no
activity on the new fragment. In this case, you can directly delete the
last row in sysusages for the db (this row was just added by alter db)
and reboot the server and it should come up cleanly.

4. Change the segmaps of the fragments you plan to remove to 0. This will
prevent future data allocations to these fragments.

Note: If any of the fragments you are using have user defined segments
on them, drop those segments before doing this.

sp_configure "allow updates", 1

reconfigure with override -- not necessary in System 11
update sysusages
set segmap = 0
where dbid = <dbid>
and lstart = <lstart>
dbcc dbrepair(<dbname>, remap)

Ensure that there is at least one data (segmap 3) and one log (segmap 4)
fragment, or one mixed (segmap 7) fragment.

If the server has been in use for some time, you can shrink it by deleting
rows from sysusages for the db, last rows first, after making sure that no
objects have any allocations on the usages.

5. Determine which objects are on the fragments you plan to remove.
dbcc usedextents( dbid,0,0,1)

Find the extent with the same value as the lstart of the first fragment you
plan to drop. You need to migrate every object appearing from this point on
in the output.

6. Migrate these objects onto earlier fragments in the database.

Objids other than 0 or 99 are objects that you must migrate or drop. You
can migrate a user table by building a new clustered index on the table
(since the segmap was changed, the new allocations will not go on this
fragment).
You can migrate some system tables (but not all) using the sp_fixindex
command to rebuild its clustered index. However, there are a few system
tables that cannot have their clustered indexes rebuilt, and if they have
any allocations on the usage, you are out of luck.

If the objid is 8, then it is the log. You can migrate the log by ensuring
that another usage has a log segment (segmap 4 or 7). Do enough activity on
the database to fill an extent's worth of log pages, then checkpoint and
dump tran.

Once you have moved all the objects, delete the row from sysusages and
reboot the server.

Run dbcc checkdb and dbcc checkalloc on the database to be sure you are ok,
then dump the database again.

Return to top


1.2.13: How do I audit the SQL sent to the server?


This does not seem to be well documented, so here is a quick means of auditing
the SQL text that is sent to the server. Note that this simply audits the SQL
sent to the server. So, if your user process executes a big stored procedure,
all you will see here is a call to the stored procedure. None of the SQL that
is executed as part of the stored procedure will be listed.

Firstly, you need to have installed Sybase security (which involves installing
the sybsecurity database and loading it using the script $SYBASE/scripts/
installsecurity). Read the Sybase Security Administration Manual, you may
want to enable a threshold procedure to toggle between a couple of audit
tables. Be warned, that the default configuration option "suspend auditing
when device full" is set to 1. This means that the server will suspend all
normal SQL operations if the audit database becomes full and the sso logs in
and gets rid of some data. You might want to consider changing this to 0
unless yours is a particularly sensitive installation.

Once that is done, you need to enable auditing. If you haven't already, you
will need to restart ASE in order to start the audit subsystem. Then comes the
bit that does not seem well documented, you need to select an appropriate audit
option, and the one for the SQL text is "cmdtext". From the sybsecurity
database, issue

sp_audit "cmdtext",<username>,"all","on"

for each user on the system that you wish to collect the SQL for. sp_audit seems
to imply that you can replace "<username>" with all, but I get the error
message "'all' is not a valid user name". Finally, enable auditing for the
system as a whole using

sp_configure "auditing",1

If someone knows where in the manuals this is well documented, I will add a
reference.
Note: The stored procedure sp_audit had a different name under previous
releases. I think that it was called sp_auditoption. Also, to get a full list
of the options and their names, go into sybsecurity and simply run sp_audit
with no arguments.

Return to top


1.2.14: sp_helpdb/sp_helpsegment is returning negative numbers


A number of releases of ASE return negative numbers for sp_helpdb. One solution
given by Sybase is to restart the server. Hmm... not always possible. An
alternative is to use the dbcc command 'usedextents'. Issue the following:

dbcc traceon(3604)
dbcc usedextents(<dbid>, 0, 1, 1)

and the problem should disappear. This is actually a solved case, Sybase solved
case no: 10454336; see the Sybase support site for more information.

Return to top


Advanced Administration Basic Administration ASE FAQ

David Owen

Jan 21, 2004, 4:26:02 AM
Archive-name: databases/sybase-faq/part4

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.
Sybase Frequently Asked Questions


Adaptive Server Enterprise

0. What's in a name?
1. ASE Administration
1.1 Basic Administration
1.2 User Database Administration
1.3 Advanced Administration
1.4 General Troubleshooting
1.5 Performance and Tuning
1.6 Server Monitoring
2. Platform Specific Issues
2.1 Solaris
2.2 NT
2.3 Linux
3. DBCC's
4. isql
5. bcp
6. SQL Development
6.1 SQL Fundamentals
6.2 SQL Advanced
6.3 Useful SQL Tricks
7. Open Client
9. Freeware
10. Sybase Technical News
11. Additional Information
12. Miscellany


What's in a name?

Throughout this FAQ you will find references to SQL Server and, starting with
this release, ASE, or Adaptive Server Enterprise to give it its full name. You
might also be a little further confused, since Microsoft also have a product
called SQL Server.

Back at about release 4.2 of Sybase SQL Server, the two products were exactly
the same; Microsoft were to do the port to NT. As is pretty well documented,
there was then a falling out. Both companies kept the same name for their data
servers and confusion began to reign. To sort this out, Sybase renamed their
product Adaptive Server Enterprise (ASE) starting with version 11.5.

I found this quote in a Sybase manual the other day:

Since changing the name of Sybase SQL Server to Adaptive Server Enterprise,
Sybase uses the names Adaptive Server and Adaptive Server Enterprise to refer
collectively to all supported versions of the Sybase SQL Server and Adaptive
Server Enterprise. Version-specific references to Adaptive Server or SQL Server
include version numbers.

I will endeavour to do the same within the FAQ, but the job is far from
complete.

Back to Top

Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?

User Database Administration # ASE FAQ


1.1.1: What is SQL Server and ASE?



Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System 10
has some significant improvements over the Sybase 4.x product line. Namely:

* the ability to allocate more memory to the dataserver without degrading its
performance.
* the ability to have more than one database engine to take advantage of
multi-processor cpu machines.
* a minimally intrusive process to perform database and transaction dumps.

Background and More Terminology

An ASE (SQL Server) is simply a Unix process. It is also known as the database
engine. It has multiple threads to handle asynchronous I/O and other tasks. The
number of threads spawned is the number of engines (more on this in a second)
times five. This is the current implementation of Sybase System 10, 10.0.1 and
10.0.2 on IRIX 5.3.

Each ASE allocates the following resources from a host machine:

* memory and
* raw partition space.

Each ASE can have up to 255 databases. In most implementations the number of
databases is limited to what seems reasonable based on the load on the ASE.
That is, it would be impractical to house all of a large company's databases
under one ASE because the ASE (a Unix process) will become overloaded.

That's where the DBA's experience comes in, along with interrogation of the
user community to determine how much activity is going to result on a given
database or databases; from that we determine whether to create a new ASE or
to house the new database under an existing one. We do make mistakes (and
businesses grow) and have to move databases from one ASE to another. At times
ASEs need to move from one CPU server to another.

With Sybase System 10, each ASE can be configured to have more than one engine
(each engine is again a Unix process). There's one primary engine that is the
master engine and the rest of the engines are subordinates. They are assigned
tasks by the master.

Interprocess communication among all these engines is accomplished with shared
memory.

Sometimes when a DBA issues a Unix kill command to extinguish a maverick
ASE, the subordinate engines are forgotten. This leaves the shared memory
allocated and eventually we may get into situations where swapping occurs
because this memory is locked. To find engines that belong to no master
ASE, simply look for engines owned by /etc/init (process id 1). These
engines can be killed -- this is just FYI and is a DBA duty.

Before presenting an example of an ASE, some other topics should be covered.


Connections

An ASE has connections to it. A connection can be viewed as a user login but
it's not necessarily so. That is, a client (a user) can spark up multiple
instances of their application and each client establishes its own connection
to the ASE. Some clients may require two or more connections per invocation.
So typically DBAs are only concerned with the number of connections, because
the number of users typically does not provide sufficient information for us
to do our job.

Connections take up ASE resources, namely memory, leaving less memory for
the ASE's available cache.

ASE Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to an ASE. It was around 80MB, with 40MB being the typical max. This
was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really was no limit. For instance, we had an ASE
cranked up to 300MB under 10. With System 11 and 12 this has been further
extended. ASEs with 4GB of memory are not uncommon. I have not heard of
an 11.9.3 or a 12 server with more than 4GB, but I am sure that they are
not far away.

The memory in an ASE is primarily used to cache data pages from disk. Consider
that the ASE is a lightweight operating system: handling users (connections),
allocating memory to users, keeping track of which data pages need to be
flushed to disk and the sort. Very sophisticated and complex. Obviously, if a
data page is found in memory it's much faster to retrieve than going out to
disk.

Each connection takes away a little bit from the available memory that is used
to cache disk pages. Upon startup, the ASE pre-allocates the memory that is
needed for each connection so it's not prudent to configure 500 connections
when only 300 are needed. We'd waste 200 connections and the memory associated
with that. On the other hand, it is also imprudent to under configure the
number of connections; users have a way of soaking up a resource (like an ASE)
and if users have all the connections a DBA cannot get into the server to
allocate more connections.
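
As a back-of-envelope illustration of the point above: the 50K-per-connection
figure below is an assumption chosen purely for illustration (the real cost
depends on stack size and network packet buffer settings), but the arithmetic
shows why over-configuring connections eats into the data page cache.

```python
# Rough cost of over-configuring user connections.
# PER_CONN_KB is an assumed illustrative figure, not an ASE constant.
PER_CONN_KB = 50

def wasted_cache_kb(configured: int, needed: int) -> int:
    """KB pre-allocated for connections that are never used, and
    therefore unavailable to the data page cache."""
    return max(configured - needed, 0) * PER_CONN_KB

# Configuring 500 connections when only 300 are needed:
print(wasted_cache_kb(500, 300), "KB of cache lost")
```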

One of the neat things about an ASE is that it reaches (just like a Unix
process) a working set. That is, upon startup it'll do a lot of physical I/O's
to seed its cache, to get lookup information for typical transactions and the
like. So initially, the first users have heavy hits because their requests have
to be performed as a physical I/O. Subsequent transactions have less physical I
/O and more logical I/O's. A logical I/O is one that is satisfied in the ASE's
buffer cache. Obviously, this is the preferred condition.


DSS and OLTP

We throw around terms like DSS and OLTP as if everyone is supposed to know
this high-tech lingo. The problem is that they are two different animals, each
requiring an ASE to be tuned accordingly.

Well, here's the low down.

* DSS - Decision Support System
* OLTP - Online Transaction Processing

What do these mean? OLTP applications are those that have very short units of
work for each connection: fetch this row and, with the results of it, update
one or two other rows. Basically, a small number of rows is affected per
transaction in rapid succession, with no significant wait times between
operations in a transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of data
to aggregate some values. So the transactions typically involve thousands of
rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same ASE because the nature of OLTP
is to grab things quickly but the nature of DSS is to stick around for a long
time reading tons of information and summarizing the results.

What a DSS application does is flush out the ASE's data page cache because of
the tremendous number of I/Os. This is obviously very bad for OLTP
applications because the small transactions are now hurt by this trauma. When
it was only OLTP a great percentage of I/O was logical (satisfied in the
cache); now transactions must perform physical I/O.

That's why it's good not to mix DSS and OLTP if at all possible.

If mixing them cannot be avoided, then you need to think carefully about how
you configure your server. Use named data caches to ensure that the very
different natures of OLTP and DSS do not conflict with each other. If you have
tables that are shared, consider using dirty reads for the DSS applications if
at all possible, since this will help not to block the OLTP side.

Asynchronous I/O

Why async I/O? The idea is that in a typical online transaction processing
(OLTP) application, you have many connections (over 200 connections) and short
transactions: get this row, update that row. These transactions are typically
spread across different tables of the databases. The ASE can then perform each
one of these asynchronously without having to wait for others to finish. Hence
the importance of having async I/O fixed on our platform.


Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically don't
do this. It appears that the master engine goes to processor 0 and subsequent
subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines do not make
Sybase perform five times as fast however we do max out with four engines.
After that performance starts to degrade. This is supposed to be fixed with
Sybase System 11.

Putting Everything Together

As previously mentioned, an ASE is a collection of databases with connections
(that are the users) to apply and retrieve information to and from these
containers of information (databases).

The ASE is built and its master device is typically built over a medium sized
(50MB) raw partition. The tempdb is built over a cooked (regular - as opposed
to a raw device) file system to realize any performance gains by buffered
writes. The databases themselves are built over the raw logical devices to
ensure their integrity. (Note: in System 12 you can use the dsync flag to
ensure that writes to file system devices are secure.)

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix, VMS or NT. However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a physical
device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it is mapped by
the DBA to a logical device. Depending on the type of the device, it is
allocated, by the DBA, to the appropriate place (vague enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The DBA
needs to create a dump device.

We'll call that logically in the database datadump_for_my_db but we'll map it
to the physical world as /lv1/dumps/in_your_eye.dat So the DBA will write a
script that connects to the ASE and issues a command like this:

dump database my_stinking_db to datadump_for_my_db

and the backupserver (out of this scope) takes the contents of my_stinking_db
and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions. The reason for this is because Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it posted
to a file system buffer (as in a cooked file system) and the machine crashed,
as far as Sybase is concerned the write was committed. It was not, however, and
integrity of the database was lost. That is why Sybase needs raw partitions.
But back to the matter at hand...

When building a new ASE, the DBA determines how much space they'll need for all
the databases that will be housed in this ASE.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the changes
are kept. That is, every row that is updated/deleted/inserted gets placed into
the log portion then applied to the data portion of the database.

That's why the DBA strives to place the raw devices for logs on separate
disks: everything has to single-thread through the log.

A transaction is a collection of SQL statements (insert/delete/update) that are
grouped together to form a single unit of work. Typically they map very closely
to the business.

I'll quote the Sybase ASE Administration Guide on the role of the log:

The transaction log is a write-ahead log. When a user issues a statement
that would modify the database, ASE automatically writes the changes to the
log. After all changes for a statement have been recorded in the log, they
are written to an in-cache copy of the data page. The data page remains in
cache until the memory is needed for another database page. At that time,
it is written to disk. If any statement in a transaction fails to complete,
ASE reverses all changes made by the transaction. ASE writes an "end
transaction" record to the log at the end of each transaction, recording
the status (success or failure) of the transaction
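
The write-ahead ordering the quote describes can be sketched in miniature. The
`MiniWAL` class below is invented purely for illustration and is in no way
ASE's internal implementation: the point it demonstrates is only that every
change is appended to the log *before* the cached page is modified, so a
failed transaction can be reversed from the log.

```python
# Miniature write-ahead-log sketch (illustration only, not ASE internals).
class MiniWAL:
    def __init__(self):
        self.log = []    # write-ahead log records
        self.pages = {}  # in-cache copies of data pages

    def begin(self):
        self.log.append(("begin",))
        self._start = len(self.log)  # remember where this transaction began

    def update(self, page, old, new):
        # Log first, then modify the cached page.
        self.log.append(("update", page, old, new))
        self.pages[page] = new

    def commit(self):
        self.log.append(("end", "success"))

    def rollback(self):
        # Reverse, in reverse order, every change logged since 'begin'.
        for rec in reversed(self.log[self._start:]):
            if rec[0] == "update":
                _, page, old, _ = rec
                self.pages[page] = old
        self.log.append(("end", "failure"))

wal = MiniWAL()
wal.pages["p1"] = 10
wal.begin()
wal.update("p1", 10, 20)
wal.rollback()
print(wal.pages["p1"])  # back to 10
```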

As such, the log will grow as user connections affect changes to the database.
The need arises to then clear out the log of all transactions that have been
flushed to disk. This is performed by issuing the following command:

dump transaction my_stinking_db to logdump_for_my_db

The ASE will write to the dumpdevice all transactions that have been committed
to disk and will delete the entries from its copy, thus freeing up space in the
log. Dumping of the transaction logs is accomplished via cron (the Unix
scheduler, NT users would have to resort to at or some third party tool) . We
schedule the heavily hit databases every 20 minutes during peak times.

A single user can fill up the log by issuing a begin transaction with no
corresponding commit/rollback transaction. This is because all their
changes are being applied to the log as an open-ended transaction, which is
never closed. This open-ended transaction cannot be flushed from the log,
and therefore grows until it occupies all of the free space on the log
segment.

And the way we dump it is with a dump device. :-)

An Example

If the DBA has four databases to plop on this ASE and they need a total of
800MB of data and 100MB of log (because that's what really matters to us), then
they'd probably do something like this:

1. allocate sufficient raw devices to cover the data portion of all the databases,
2. allocate sufficient raw devices to cover the log portion of all the databases,
3. start allocating the databases to the devices.

For example, assuming the following database requirements:

| DB | Data | Log |
| a  | 300  |  30 |
| b  | 400  |  40 |
| c  | 100  |  10 |

and the following devices:

| Logical       | Physical           | Size |
| dks3d1s2_data | /dev/rdsk/dks3d1s2 | 500  |
| dks4d1s2_data | /dev/rdsk/dks4d1s2 | 500  |
| dks5d1s0_log  | /dev/rdsk/dks5d1s0 | 200  |

then the DBA may elect to create the databases as follows:

create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine because it can be used for
future growth. While the Sybase ASE is running, no other Sybase ASE can
re-allocate these physical devices.
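
The allocation arithmetic above can be checked mechanically. This little
sketch (not a Sybase tool, just an illustration using the sizes from the
example) totals the fragments from the three create database commands and
verifies that no device is over-allocated:

```python
# Device sizes (MB) from the table above.
devices = {"dks3d1s2_data": 500, "dks4d1s2_data": 500, "dks5d1s0_log": 200}

# (database, device, MB) fragments from the create database commands above.
fragments = [
    ("a", "dks3d1s2_data", 300), ("a", "dks5d1s0_log", 30),
    ("b", "dks4d1s2_data", 400), ("b", "dks5d1s0_log", 40),
    ("c", "dks3d1s2_data", 50),  ("c", "dks4d1s2_data", 50),
    ("c", "dks5d1s0_log", 10),
]

used = {}
for _, dev, mb in fragments:
    used[dev] = used.get(dev, 0) + mb

for dev, size in devices.items():
    free = size - used.get(dev, 0)
    assert free >= 0, f"{dev} is over-allocated"
    print(f"{dev}: {used.get(dev, 0)} MB used, {free} MB free")
```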


TempDB

TempDB is simply a scratch-pad database. It gets recreated when a SQL Server is
rebooted. The information held in this database is temporary data. A query may
build a temporary table to assist it; the Sybase optimizer may decide to create
a temporary table to assist itself.

Since this is an area of constant activity we create this database over a
cooked file system which has historically proven to have better performance
than raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new ASE, we allocate a port to it (currently, DBA reserves
ports 1500 through 1899 for its use). We then map a host name to the different
ports: hera, fddi-hera and so forth. We can actually have more than one port
number for an ASE but we typically don't do this.

Back to top


1.1.2: How to start/stop ASE when CPU reboots


Below is an example of the various files (on Irix) that are needed to start/
stop an ASE. The information can easily be extended to any UNIX platform.

The idea is to allow as much flexibility to the two classes of administrators
who manage the machine:

* The System Administrator
* The Database Administrator

Any errors introduced by the DBA will not interfere with the System
Administrator's job.

With that in mind we have the system startup/shutdown file /etc/init.d/sybase
invoking a script defined by the DBA: /usr/sybase/sys.config/


On some operating systems this file must be linked to a corresponding entry in
/etc/rc.0 and /etc/rc.2 -- see rc0(1M) and rc2(1M)

# last modified: 10/17/95, sr.
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files
CONFIG=/usr/sybase/sys.config

if $IS_ON verbose ; then        # For a verbose startup and shutdown
        ECHO=echo
        VERBOSE=-v
else                            # For a quiet startup and shutdown
        ECHO=:
        VERBOSE=
fi

case "$1" in
'start')
        if $IS_ON sybase; then
                if [ -x $CONFIG/start.sybase ]; then
                        $ECHO "starting Sybase servers"
                        /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
                fi      # <error condition>
        fi
        ;;
'stop')
        if $IS_ON sybase; then
                if [ -x $CONFIG/stop.sybase ]; then
                        $ECHO "stopping Sybase servers"
                        /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
                fi      # <error condition>
        fi
        ;;
*)
        echo "usage: $0 {start|stop}"
        ;;
esac



#!/bin/sh -a
# Script to start sybase
# NOTE: different versions of sybase exist under /usr/sybase/{version}
# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
        OUTPUT=">/dev/null 2>&1"
else
        OUTPUT=""
fi
# 10.0.2 servers
cd $HOME
# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT
# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT
exit 0


#!/bin/sh
# Script to stop sybase
# Determine if we need to spew our output
if [ -z "$1" ] ; then
        OUTPUT=">/dev/null 2>&1"
else
        OUTPUT=""
fi
eval killall -15 $OUTPUT dataserver backupserver sybmultbuf
sleep 2
# if they didn't die, kill 'em now...
eval killall -9 $OUTPUT dataserver backupserver sybmultbuf
exit 0

If your platform doesn't support killall, it can easily be simulated as
follows:

# Simple killall simulation...
# $1 = signal
# $2 = process_name
# no error checking but assume first parameter is signal...
# what ya want for free? :-)
# (with ps -ef the process id is in the second column, hence $2 below)
kill -$1 `ps -ef | fgrep $2 | fgrep -v fgrep | awk '{ print $2 }'`

Back to top


1.1.3: How do I move tempdb off of the Master Device?


There used to be a section in the FAQ describing how to drop all of tempdb's
devices physically from the master device. This can make recovery of the
server impossible in the case of a serious error, so it is strongly
recommended that you do not do this but simply drop the segments as outlined
below.

Sybase TS Preferred Method of Moving tempdb off the Master Device.

This is the Sybase TS method of removing most activity from the master device:

1. Alter tempdb on another device:
1> alter database tempdb on ...
2> go
2. Use the tempdb:
1> use tempdb
2> go
3. Drop the segments:
1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment "logsegment", tempdb, master
2> go
1> sp_dropsegment "system", tempdb, master
2> go

Note that there is still some activity on the master device. On a three
connection test that I ran:

while ( 1 = 1 )
begin
    create table #x (col_a int)
    drop table #x
end

there was one write per second. Not bad.

An Alternative

(I recently did some benchmarks comparing this method, the previous method
and a combination of both. According to sp_sysmon there was no difference
in activity at all. I leave it here just in case it proves useful to
someone.)

The idea of this handy script is to simply fill the first 2MB of tempdb thus
effectively blocking anyone else from using it. The slight gotcha with this
script, since we're using model, is that all subsequent database creates will
also have tempdb_filler installed. This is easily remedied by dropping the
table after creating a new database.

This script works because tempdb is rebuilt every time the ASE is rebooted.
Very nice trick!

/* this isql script creates a table in the model database. */
/* Since tempdb is created from the model database when the */
/* server is started, this effectively moves the active */
/* portion of tempdb off of the master device. */

use model

/* note: 2k row size */
create table tempdb_filler(
a char(255) not null,
b char(255) not null,
c char(255) not null,
d char(255) not null,
e char(255) not null
)

/* insert 1024 rows */
declare @i int
select @i = 1
while (@i <= 1024)
begin
    insert into tempdb_filler values('a','b','c','d','e')
    if (@i % 100 = 0) /* dump the transaction every 100 rows */
        dump tran model with truncate_only
    select @i = @i + 1
end
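
The "2k row size" comment works out as follows (ignoring page header
overhead, so this is approximate): five char(255) columns make a 1275-byte
row, only one such row fits on a 2K data page, and 1024 rows therefore pin
down roughly 2MB.

```python
# Why 1024 rows of tempdb_filler occupy about 2MB (page overhead ignored).
row_bytes = 5 * 255                       # five char(255) columns = 1275
page_bytes = 2 * 1024                     # 2K data page
rows_per_page = page_bytes // row_bytes   # only one row fits per page
pages_used = 1024 // rows_per_page
print(pages_used * page_bytes)            # bytes occupied
```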

Back to top


1.1.4: How do I correct timeslice -201


(Note, this procedure is only really necessary with pre-11.x systems. In
system 11 systems, these parameters are tunable using sp_configure.)

Why Increase It?

Basically, it will allow a task to be scheduled onto the CPU for a longer time.
Each task on the system is scheduled onto the CPU for a fixed period of time,
called the timeslice, during which it does some work, which is resumed when its
next turn comes around.

The process has up until the value of ctimemax (a config block variable) to
finish its task. As the task is working away, the scheduler counts down
ctimemax units. When it gets to the value of ctimemax - 1, if it gets stuck and
for some reason cannot be taken off the CPU, then a timeslice error gets
generated and the process gets infected.

On the other hand, ASE will allow a server process to run as long as it needs
to. It will not swap the process out for another process to run; the process
decides when it is "done" with the server CPU. If, however, a process goes
on and on and never relinquishes the server CPU, then the server will
timeslice the process.

Potential Fix

1. Shutdown the ASE
2. %buildmaster -dyour_device -yctimemax=2000
3. Restart your ASE. If the problem persists contact Sybase Technical Support
notifying them what you have done already.

Back to top


1.1.5: Certified Sybase Professional


There have been changes in the process of becoming a Sybase Certified
Professional. There's a very informative link, Professional Certification, in
the education/profcert area of the Sybase web site.

Rob Verschoor has put together some good stuff on his pages, which have pretty
much all that you need
to know. He also has a quiz which is intended to test each and everyone's
knowledge of ASE and RepServer.

Sybase have released some sample questions (look for them on the Sybase web
site). The GUI requires MS Windows (at the time of
writing), but they are definitely a sample of what you will be asked. There are
also a couple of CDs available with yet more questions on them.

The Certification Kickback

There have been a couple of articles recently covering the kickback that seems
to be happening as far as certification is concerned. Several HR people have
said that if a person's CV (resume) is sent in covered in certifications then
it goes straight into the bit bucket. I do not know if this is true or not,
but one thing that you might wish to consider is the preparation of two CVs,
one with certifications, one without. If the job request specifies that
certification is necessary, then send in the appropriate CV. If it does not
specify certification, send in the clean version. If you go into the interview
for a job that did not specify certifications up front and the interviewer
starts going on about you not being certified, you simply produce your card as
proof.


1.1.6: RAID and Sybase


Here's a short summary of what you need to know about Sybase and RAID.

The newsgroup has a detailed FAQ on RAID, but here are a few
basics.


RAID means several things at once. It provides increased performance through
disk striping, and/or resistance to hardware failure through either mirroring
(fast) or parity (slower but cheaper).


RAID 0

RAID 0 is just striping. It allows you to read and write quickly, but provides
no protection against failure.


RAID 1

RAID 1 is just mirroring. It protects you against failure, and generally reads
and writes as fast as a normal disk. It uses twice as many disks as normal (and
sends twice as much data across your SCSI bus, but most machines have plenty of
extra capacity on their SCSI busses.)

Sybase mirroring always reads from the primary copy, so it does not
increase read performance.

RAID 0+1

RAID 0+1 (also called RAID 10) is striping and mirroring together. This gives
you the highest read and write performance of any of the raid options, but uses
twice as many disks as normal.


RAID 4 and 5

RAID 4 and 5 have disk striping and use one extra disk to provide parity.
Various
vendors have various optimizations, but this RAID level is generally much
slower at writes than any other kind of RAID.


I am not sure if this is a genuine RAID standard, further checking on your part
is required.


Write Caching

Most hardware RAID controllers also provide a battery-backed RAM cache for
writing. This is very useful, because it allows the disk to claim that the
write succeeded before it has done anything. If there is a power failure, the
information will (hopefully) be written to disk when the power is restored. The
cache is very important because database log writes cause the process doing the
writes to stop until the write is successful. Systems with write caching thus
complete transactions much more quickly than systems without.

What RAID levels should my data, log, etc be on? Well, the log disk is
frequently written, so it should not be on RAID 4 or 5. If your data is
infrequently written, you could use RAID 4 or 5 for it, because you don't mind
that writes are slow. If your data is frequently written, you should use RAID
0+1 for it. Striping your data is a very effective way of avoiding any one disk
becoming a hot-spot. Traditionally Sybase databases were divided among devices
by a human attempting to determine where the hot-spots are. Striping does this
in a straightforward fashion, and also continues to work if your data access
patterns change.

Your tempdb is data but it is frequently written, so it should not be on RAID 4
or 5.

If your RAID controller does not allow you to create several different kinds of
RAID volumes on it, then your only hope is to create a huge RAID 0+1 set. If
your RAID controller does not support RAID 0+1, you shouldn't be using it for
database work.

Back to top


1.1.7: How to swap a db device with another


Here are four approaches. Before attempting any of the following: Backup,
Backup, Backup.

Dump and Restore

1. Backup the databases on the device, drop the databases, drop the devices.
2. Rebuild the new devices.
3. Rebuild the databases (Make sure you recreate the fragments correctly - See
Ed Barlow's scripts ( for an sp that
helps you do this if you've lost your notes. Failure to do this will
possibly lead to data on log segments and log on data segments).
4. Reload the database dumps!

Twiddle the Data Dictionary - for brave experts only.

1. Shut down the server.
2. Do a physical dump (using dd(1) or a similar utility) of the device to be
swapped.
3. Load the dump onto the new device.
4. Edit the data dictionary (sysdevices.physname) to point to the new device.

The Mirror Trick

1. Create a mirror of the old device, on the new device.
2. Unmirror the primary device, thereby making the _backup_ the primary
device.
3. Repeat this for all devices until the old disk is free.

dd (Unix only)

(This option is of no use if you need to move a device now; rather, it is for
when you anticipate moving a device at some point in the future.)

You may want to use this approach for creating any database.

Create (or use) a directory for symbolic links to the devices you wish to use.
Then create your database, but instead of going to /dev/device, go to /
directory/symlink - When it comes time to move your devices, you shut down the
server, simply dd(1) the data from the old device to the new device, recreate
the symbolic links to the new device and restart the ASE. Simple as that.

Backups are a requisite in all cases, just in case.

Back to top


1.1.8: Server naming and renaming


There are three totally separate places where ASE names reside, causing much
confusion.

ASE Host Machine interfaces File

A master entry in here for server TEST will provide the network information
that the server is expected to listen on. The -S parameter to the dataserver
executable tells the server which entry to look for, so in the RUN_TEST file,
-STEST will tell the dataserver to look for the entry under TEST in the
interfaces file and listen on any network parameters specified by its 'master'
entries:

TEST
	master tcp ether hpsrv1 1200
	query tcp ether hpsrv1 1200

Note that preceding the master/query entries there's a tab.

This is as far as the name TEST is used. Without further configuration the
server does not know its name is TEST, nor do any client applications.
Typically there will also be query entries under TEST in the local interfaces
file, and client programs running on the same machine as the server will pick
this connection information up. However, there is nothing to stop the query
entry being duplicated under another name entirely in the same interfaces file.

ARTHUR
	query tcp ether hpsrv1 1200

isql -STEST or isql -SARTHUR will connect to the same server. The name is
simply a search parameter into the interfaces file.
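
Since the name is nothing more than a search key, the lookup can be sketched
like this. The snippet is illustrative only (the interfaces text below is a
simplified, made-up example, and real lookups are done by Open Client, not by
code like this); it just shows two names resolving to the same address:

```python
# Toy interfaces-file lookup: the server name is only a search key,
# so TEST and ARTHUR can resolve to the same host and port.
INTERFACES = """\
TEST
\tmaster tcp ether hpsrv1 1200
\tquery tcp ether hpsrv1 1200
ARTHUR
\tquery tcp ether hpsrv1 1200
"""

def lookup_query(name, text=INTERFACES):
    current = None
    for line in text.splitlines():
        if line and not line[0].isspace():
            current = line.strip()           # unindented line names a server
        elif current == name and line.split()[:1] == ["query"]:
            _, proto, net, host, port = line.split()
            return host, int(port)
    return None

print(lookup_query("TEST"), lookup_query("ARTHUR"))  # same address twice
```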

Client Machine interfaces File

Again, as the server name specified to the client is simply a search parameter
for Open Client into the interfaces file, SQL.INI or WIN.INI the name is
largely irrelevant. It is often set to something that means something to the
users, especially where they might have a choice of servers to connect to. Also
multiple query entries can be set to point to the same server, possibly using
different network protocols. eg. if TEST has the following master entries on
the host machine:

master tli spx /dev/nspx/ \xC12082580000000000012110
master tcp ether hpsrv1 1200

Then the client can have a meaningful name:

query tcp ether hpsrv1 1200

or alternative protocols:

query tcp ether hpsrv1 1200
query tli spx /dev/nspx/ \xC12082580000000000012110


sysservers

This system table holds information about remote ASEs that you might want to
connect to, and also provides a method of naming the local server.

Entries are added using the sp_addserver system procedure - add a remote server
with this format:

sp_addserver server_name, null, network_name

server_name is any name you wish to refer to a remote server by, but
network_name must be the name of the remote server as referenced in the
interfaces file local to your local server. It normally makes sense to make the
server_name the same as the network_name, but you can easily do:

sp_addserver LIVE, null, ACCTS_LIVE

When you execute, for example, exec LIVE.master..sp_helpdb, the local ASE will
translate LIVE to ACCTS_LIVE and try to talk to ACCTS_LIVE via the ACCTS_LIVE
entry in the local interfaces file.

Finally, a variation on the sp_addserver command:

sp_addserver LOCALSRVNAME, local

names the local server (after a restart). This is the name the server reports
in the errorlog at startup, the value returned by @@SERVERNAME, and the value
placed in Open Client server messages. It can be completely different from the
names in RUN_SRVNAME or in local or remote interfaces - it has no bearing on
connectivity matters.

Back to top


1.1.9: How do I interpret the tli strings in the interface file?


The tli string contained in Solaris interfaces files is a hex string
containing the port and IP address. If you have an entry

master tli tcp /dev/tcp \x000204018196c4510000000000000000

Then it can be interpreted as follows:

x0002 no user interpretation (header info?)
0401 port number (1025 decimal)
81 first part of IP address (129 decimal)
96 second part of IP address (150 decimal)
c4 third part of IP address (196 decimal)
51 fourth part of IP address (81 decimal)

So, the above tli address is equivalent to

master tcp ether sybhost 1025

where sybhost's IP address is

The following piece of Sybperl (courtesy of Michael Peppler) takes a tli entry
and returns the IP address and port number for each server in a Solaris
interfaces file.

#!/usr/local/bin/perl -w

use strict;

my $server;
my @dat;
my ($port, $ip);

while(<>) {
    next if /^\s*$/;           # skip blank lines
    next if /^\s*\#/;          # skip comments
    if(/^\w/) {                # an unindented line is a server name
        $server = $_;
        $server =~ s/\s*$//;
    } else {                   # an indented line is a master/query entry
        @dat = split(' ', $_);
        ($port, $ip) = parseAddress($dat[4]);
        print "$server - $dat[0] on port $port, host $ip\n";
    }
}

sub parseAddress {
    my $addr = shift;

    my $port;
    my $ip;

    my (@arr) = (hex(substr($addr, 10, 2)),
                 hex(substr($addr, 12, 2)),
                 hex(substr($addr, 14, 2)),
                 hex(substr($addr, 16, 2)));
    $port = hex(substr($addr, 6, 4));
    $ip = join('.', @arr);

    ($port, $ip);
}

Back to top


1.1.10: How can I tell the datetime my Server started?


Method #1

The normal way would be to look at the errorlog, but this is not always
convenient or even possible. From a SQL session you can find out the server
startup time to within a few seconds using:

select "Server Start Time" = crdate
from master..sysdatabases
where name = "tempdb"

This works because tempdb is rebuilt every time the server starts, so its
creation date is effectively the server start time.

Method #2

Another useful query is:

select * from sysengines

which shows, amongst other things, when each engine came online, as well as
the address and port number at which the server is listening.

Back to top


1.1.11: Raw partitions or regular files?


Hmmm... as always, this answer depends on the vendor's implementation of
cooked file system I/O for the ASE...

Performance Hit (synchronous vs asynchronous)

If, on a given platform, the ASE performs file system I/O synchronously, then
the ASE is blocked on the read/write and throughput decreases tremendously.

The way the ASE typically works is that it will issue an I/O (read/write) and
save the I/O control block and continue to do other work (on behalf of other
connections). It'll periodically poll the workq's (network, I/O) and resume
connections when their work has completed (I/O completed, network data
arrived, and so on).

Performance Hit (bcopy issue)

Assuming that the file system I/O is asynchronous (this can be done on SGI), a
performance hit may be realized when bcopy'ing the data from kernel space to
user space.

Cooked I/O typically (again, SGI has something called directed I/O which allows
I/O to go directly to user space) has to go from disk, to kernel buffers and
from kernel buffers to user space; on a read. The extra layer with the kernel
buffers is inherently slow. The data is moved from kernel buffers to/from user
space using bcopy(). On small operations this typically isn't that much of an
issue but in a RDBMS scenario the bcopy() layer is a significant performance
hit because it's done so often...

Performance Gain!

It's true, using file systems, at times you can get performance gains assuming
that the ASE on your platform does the I/O asynchronously (although there's a
caveat on this too... I'll cover that later on).

If your machine has sufficient memory and extra CPU capacity, you can realize
some gains by having writes return immediately because they're posted to
memory. Reads will gain from the anticipatory fetch (read-ahead) algorithm
employed by most file systems.

You'll need extra memory to house the kernel buffered data and you'll need
extra CPU capacity to allow bdflush() to write the dirty data out to disk...
eventually... but with everything there's a cost: extra memory and free CPU
capacity.

One argument is that, instead of giving the O/S the extra memory (by leaving
it free), you should give it to the ASE and let it do its own caching... but
that's a different discussion.

Data Integrity and Cooked File System

If the Sybase ASE is not certified to be used over a cooked file system then,
because of the nature of the kernel buffering (see the section above), you may
face database corruption by using a cooked file system anyway. The ASE thinks
that it has posted its changes out to disk but in reality it has gone only to
memory. If the machine halts without bdflush() having a chance to flush memory
out to disk, your database may become corrupted.

Some O/S's allow cooked files to have a write through mode and it really
depends if the ASE has been certified on cooked file systems. If it has, it
means that when the ASE opens a device which is on a file system, it fcntl()'s
the device to write-through.
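On POSIX systems the write-through idea looks roughly like this (a Python sketch purely for illustration; ASE itself does this in C via open()/fcntl(), and the file name here is invented):

```python
import os
import tempfile

# Opening a cooked-file-system "device" with O_SYNC: each write must
# reach stable storage before returning, rather than sitting in kernel
# buffers waiting for bdflush() -- the guarantee a raw partition gives.
path = os.path.join(tempfile.mkdtemp(), "device.dat")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"\x00" * 2048)   # one 2K server page
finally:
    os.close(fd)
```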

When to use cooked file system?

I typically build my tempdb on cooked file system and I don't worry about data
integrity because tempdb is rebuilt every time your ASE/SQL Server is rebooted.

Back to top


1.1.12: Is Sybase Y2K (Y2000) compliant?


Sybase is year 2000 compliant at specific revisions of each product. Full
details are available at, specifically (as these links
will undoubtedly change):

Note: Since we have made it to 2000 more or less intact, I see no reason to
include this question. I plan to remove with the next release of the FAQ. If
you feel strongly about leaving it in then let me know.

Back to top


1.1.13 How Can I Run the ASE Upgrade Manually?


How to Run the ASE Upgrade Manually

This document describes the steps required to perform a manual upgrade of ASE
from release 4.x or 10.0x to release 11.0x. In most cases, however, you should
use sybinit to perform the upgrade.


1. Use release 11.0x sybinit to run the pre-eligibility test and Check
Reserved words. Make any necessary changes that are mentioned in the
sybinit log. The sybinit log is located in $SYBASE/init/logs/logxxxx.yyy.
2. Use isql to connect to the 4.x or 10.0x ASE and do the following tasks:
a. Turn on option to allow updates to system tables:
1> sp_configure "allow updates", 1
2> go

b. Checkpoint all databases:
1> use "dbname"
2> go
1> checkpoint
2> go

c. Shutdown the 4.x or 10.0x ASE.
1> shutdown
2> go
3. Copy the interfaces file to the release 11.0x directory.
4. Set the environment variable SYBASE to the release 11.0x directory.
5. Copy the runserver file to the release 11.0x $SYBASE/install directory.
6. Edit the $SYBASE/install/RUN_SYBASE (runserver file) to change the path
   from the 4.x or 10.x dataserver directory to the new release 11.0x
   dataserver directory.
7. Start ASE using the new runserver file.
% startserver -f$SYBASE/install/RUN_SYBASE
8. Run the upgrade program:

UNIX: $SYBASE/upgrade/upgrade -S"servername" -P"sapassword" \
          > $SYBASE/init/logs/mylog.log 2>&1
VMS:  SYBASE_SYSTEM:[SYBASE.UPGRADE]upgrade /password="sa_password" /servername="servername"

9. Shut down SQL server after a successful upgrade.
% isql -Usa -Pxxx
1> shutdown
2> go
10. Start ASE using the release 11.0x runserver file.

% startserver -f$SYBASE/install/RUN_SYBASE

11. Create the sybsystemprocs device and database if upgrading from 4.9.x. You
    should create a 21MB sybsystemprocs device and database.
    a. Use the disk init command to create the sybsystemprocs device and
       database manually, for example:

       disk init name = "sybprocsdev",
           physname = "/dev/sybase/rel1102/sybsystemprocs.dat",
           vdevno = 4, size = 10752
       go

       To check which vdevno is available, type:

       1> select distinct low/16777216 from sysdevices
       2> order by low
       3> go

       A sample create database command:

       create database sybsystemprocs on sybprocsdev = 21
       go

       Please refer to the "Sybase ASE Reference Manual" for more
       information on these commands.

12. Run the installmaster and installmodel scripts:
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmaster
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmodel
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmaster"
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmodel"
13. If you upgraded from ASE 4.9.2, you will need to run sp_remap to remap the
compiled objects. Sp_remap remaps stored procedures, triggers, rules,
defaults, or views to be compatible with the current release of ASE. Please
refer to the Reference Manual Volume II for more information on the
sp_remap command.

The syntax for sp_remap:

sp_remap object_name

If you are upgrading to ASE 11.0.x and the upgrade process failed when using
sybinit, you can invoke sybinit and choose "remap query trees" from the
upgrade menu screen. This is a new option that appears after a failed upgrade.

Back to top


1.1.14 We have lost the sa password, what can we do?


The first thing to remember is Douglas Adams' famous advice: "Don't panic"!

I know that most people use the 'sa' account all of the time, which is fine if
there is only ever one dba administering the system. If you have more than one
person accessing the server using the 'sa' account, consider using sa_role
enabled accounts and disabling the 'sa' account. Funnily enough, this is
obviously what Sybase think because it is one of the questions in the
certification exams.

If you see that someone is logged in using the 'sa' account or is using an
account with 'sa_role' enabled, then you can do the following:

sp_configure "allow updates to system tables",1
update syslogins set password=null where name = 'sa'
sp_password null,newPassword

You must remember to reset the password before exiting isql or sqsh. I thought
that setting it to null would be enough, and exited isql thinking that I would
be able to get in with a null password. Take it from me that the risk is not
worth it. It failed for me and I had to kill the dataserver and get a new
password. I just tried the above method and it works fine.

If you have a user with sso_role enabled, login with that account and change
the 'sa' password that way. It is often a good idea to have a separate site
security officer, just to get you out of this sticky situation. It certainly
stops you looking an idiot in management's eyes for having to reboot
production because you have locked yourself out!

OK, so we have got to the point where there are no accounts with sufficient
privileges to allow you to change the 'sa' account password. (You are sure
about that? Since the next part can cause data loss, have another quick
look.) We now need to do something more drastic.

If the server is actually running, then you need to stop it. We know that the
only accounts that can stop the server in a nice manner are not available, so
it has to be some sort of kill. You can try:



kill <pid>
kill -15 <pid>

(they are identical) which is designed to be caught by ASE, which then performs
the equivalent of shutdown with nowait. If ASE does not die, and you should
give it a little while to catch and act on the signal, then you might have to
try other measures, which is probably kill -9. Note that if you have tables
with identity columns, most of these will jump alarmingly, unless you are using
ASE 12.5 and the identity interval is set to 1.

Once down, edit the RUN_SERVER file (RUN_SERVER.bat on NT) and add "-psa" (it
is important not to leave a space between the "-p" and the "sa", and that it
is all lower-case) to the end of the dataserver or sqlsrvr.exe line. You will
end
up with a file that looks a bit like:

# Adaptive Server name: N_UTSIRE
# Master device path: /data/sybase/databases/N_UTSIRE/master.dat
# Error log path: /opt/sybase-11.9.2/install/N_UTSIRE.log
# Directory for shared memory files: /opt/sybase-11.9.2
# Regenerate sa password -psa
/opt/sybase-11.9.2/bin/dataserver \
-d/data/sybase/databases/N_UTSIRE/master.dat \
-e/opt/sybase-11.9.2/install/N_UTSIRE.log \
-M/opt/sybase-11.9.2 -psa \

(I add the line mentioning the regenerate, so that if I need to do this in a
moment of extreme pressure it is there in front of my nose.)

Now, start the server again and you should see the following on the screen:

00:00000:00001:2001/05/26 18:29:21.39 server 'bin_iso_1' (ID = 50)
00:00000:00001:2001/05/26 18:29:21.39 server on top of default character set:
00:00000:00001:2001/05/26 18:29:21.39 server 'iso_1' (ID = 1).
00:00000:00001:2001/05/26 18:29:21.39 server Loaded default Unilib conversion handle.

New SSO password for sa:tmfyrkdwpibung

Note that it is not written to the log file, so keep your eyes peeled.

On NT you will have to start the server from the command line and not use
Sybase Central or the control panel.

Obviously, you will want to change the password to something much more
memorable as soon as possible.

Remember to remove the "-psa" from the "RUN" file before you start the server
again or else the password will be changed again for you.

Back to top


1.1.15 How do I set a password to be null?


Since ASE 11 (I cannot remember if it was with the very first release of 11,
but certainly not before) the password column in syslogins has been encrypted.
Setting this column to NULL does not equate to that login having a NULL
password. A NULL password still requires the correct encrypted binary string
to be present in the password column.

In release 12 and above, set the minimum password length to be 0 using
sp_configure and give that account a null password, and all should be fine.

Before 12, it is not possible to set the minimum password length, so the
direct approach is not possible. Instead, update the relevant record in
syslogins, setting the password column to be the same as that of an account
with a NULL password.

How does one get the correct binary value? When a new ASE is built, the 'sa'
account has a NULL password to start with. Setting an account to have the same
binary value as such an 'sa' account should work. Remember that the binary
string is going to be specific to the operating system and the exact release of
ASE etc. Obviously, if you have set the password of your 'sa' accounts to be
something other than NULL (sensible move), then you are going to have to build
yourself a dummy server just to get the correct string. If this is important to
you, then you may wish to store the value somewhere safe once you have
generated it.

Yet another method would be to simply insert the correct hex string into the
password column. Rob Verschoor has a very nice stored proc on his site called
sp_blank_password to allow you to do just this. Go to
blankpwd.html .

Back to top


1.1.16: Does Sybase support Row Level Locking?


With Adaptive Server Enterprise 11.9, Sybase introduced row level locking into
its product. In fact it went further than that: it introduced three different
locking schemes:

* All Pages Locking

This is the scheme that is implemented in all servers prior to 11.9. Here
locks are taken out at the page level, which may include many rows. The
name refers to the fact that all of the pages in any data manipulation
statement are locked, both data and index.

* Data Page Locking

The other two locking schemes are bundled together under the title Data
Page Locking, referring to the fact that only data pages are ever locked in
the conventional sense. Data Page Locking is divided into two categories:
+ Data Only Locking

This locking scheme still locks a page at a time, including all of the
rows contained within that page, but uses a new mechanism, called
latches, to lock index pages for the shortest possible time. One of
the consequences of this scheme is that it does not update index
pages when a row moves. In order to support this Sybase has introduced
a new concept, forwarded rows: rows that have had to move because they
have grown beyond the space allowed for them on the page on which they
were created (2,002 usable bytes per 2K page).

+ Row Level Locking

Just as it sounds, the lock manager only locks the row or rows involved
in the operation.

Back to top


1.1.17: What platforms does ASE run on?


Sybase has an excellent lookup page that tells you all of the releases that
Sybase has certified as running on a particular platform. Go to http:// .

Back to top


1.1.18: How do I backup databases > 64G on ASE prior to 12.x?


As you are all well aware, prior to version 12 of ASE, dumping large databases
was a real pain. Tape was the only option for anything greater than 64 gig.
This was because only 32 dump devices, or stripes, were supported, and since
file based stripes were restricted to no more than 2 gig, the total amount of
data that could be dumped was <= 32 * 2 = 64G.

With the introduction of ASE 12, the number of stripes was increased

Back to top



1.5.7: How much memory to configure?


System 10 and below.


At some point you'll wonder if your ASE has been configured with sufficient
memory. We hope that it's not during some crisis, but that's probably when
it'll happen.

The most important thing in setting up memory for an ASE is that it has to be
large enough to accommodate:

* concurrent user connections
* active procedures
* and concurrent open databases.

Setting the ASE up incorrectly will affect its performance. A delicate balance
needs to be struck: your ASE must be large enough to accommodate the users,
but not so large that it adversely affects the host machine (such as by
causing swapping).

Assumptions made of the reader:

* The reader has some experience administering ASEs.
* All queries have been tuned and there are no unnecessary table scans.


As the ASE starts up, it pre-allocates its structures to support the
configuration. The memory that remains after the pre-allocation phase is the
available cache.

The available cache is partitioned into two pieces:

1. buffer cache - data pages to be sent to a user connection or flushed to
disk.
2. procedure cache - where query plans live.

The idea is to determine if the buffer cache and the procedure cache are of
adequate size. As a DBA you can use dbcc memusage to ascertain this.

The information provided by dbcc memusage is daunting at first but, taken in
sections, is easy to understand; it provides the DBA with the vital
information necessary to determine if more memory is required and where it is
needed.

If the procedure cache is too small, user connections will get sporadic 701's:

There is insufficient system memory to run this query.

If the buffer cache is too small, response time may be poor or spiky.

The following text describes how to interpret the output of dbcc memusage and
to correlate this back to the fundamental question:

Does my ASE have enough memory?


Before delving into the world of dbcc memusage, some definitions to get us
started:

Buffer Cache (also referred to as the Data Cache)
Area of memory where ASE stores the most recently used data pages and index
pages in 2K page units. If ASE finds a data page or index page in the
buffer cache, it doesn't need to perform a physical I/O (it is reported as
a logical I/O). If a user connection selects data from a database, the ASE
loads the 2K data page(s) here and then hands the information off to the
user connection. If a user connection updates data, these pages are
altered, and then they are flushed out to disk by the ASE.

This is a bit simplistic but it'll do. Read on for more info though.

The cache is maintained as a doubly linked list. The head of the list
is where the most recently used pages are placed. Naturally towards the
tail of the chain are the least recently used pages. If a page is
requested and it is found on the chain, it is moved back to the front
of the chain and the information is relayed, thus saving a physical I/O.

But wait! This recycling is not done forever. When a checkpoint occurs
any dirty pages are flushed. Also, the parameter cbufwashsize
determines how many times a page containing data can be recycled before
it has to be flushed out to disk. For OAM and index pages the following
parameters apply coamtrips and cindextrips respectively.

Procedure Cache
Area of memory where ASE stores the most recently used query plans of
stored procedures and triggers. This procedure cache is also used by the
Server when a procedure is being created and when a query is being
compiled. Just like the buffer cache, if SQL Server finds a procedure or a
compilation already in this cache, it doesn't need to read it from disk.

The size of procedure cache is determined by the percentage of remaining
memory configured for this Server parameter after ASE memory needs are met.

Available Cache

When the ASE starts up it pre-allocates its data structures to support the
current configuration. For example, based on the number of user connections,
additional netmem, open databases and so forth the dataserver pre-allocates how
much memory it requires to support these configured items.

What remains after the pre-allocation is the available cache. The available
cache is divided into buffer cache and procedure cache. The sp_configure
"procedure cache" parameter determines the percentage breakdown. A value of 20
would read as follows:

20% of the available cache is dedicated to the procedure cache and 80% is
dedicated to the buffer cache.

Your pal: dbcc memusage

dbcc memusage takes a snapshot of your ASE's current memory usage and reports
this vital information back to you. The output shows how your procedure cache
is being used and how much of the buffer cache is currently in use.

An important piece of information is the size of the largest query plan. We'll
talk about that more below.

It is best to run dbcc memusage after your ASE has reached a working set. For
example, at the end of the day or during lunch time.

Running dbcc memusage will freeze the dataserver while it does its work.
The more memory you have configured for the ASE the longer it'll take. Our
experience is that for an ASE with 300MB it'll take about four minutes to
execute. During this time, nothing else will execute: no user queries,
nothing.

In order to run dbcc memusage you must have sa privileges. Here's a sample
execution for discussion purposes:

1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go
Memory Usage:

Meg. 2K Blks Bytes

Configured Memory:300.0000 153600 314572800

Code size: 2.6375 1351 2765600
Kernel Structures: 77.6262 39745 81396975
Server Structures: 54.4032 27855 57045920
Page Cache:129.5992 66355 135894640
Proc Buffers: 1.1571 593 1213340
Proc Headers: 25.0840 12843 26302464

Number of page buffers: 63856
Number of proc buffers: 15964

Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387

Procedure Cache, Top 20:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
Database Id: 6
Object Id: 1668357178
Object Name: lp_cm_subcase_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 10
Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
Database Id: 6
Object Id: 132351706
Object Name: csp_get_case
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 9
Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
Database Id: 6
Object Id: 1858261845
Object Name: lp_get_last_caller_new
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse.
Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the ASE.

Memory Usage:

Meg. 2K Blks Bytes

Configured Memory:300.0000 153600 314572800

Code size: 2.6375 1351 2765600
Kernel Structures: 77.6262 39745 81396975
Server Structures: 54.4032 27855 57045920
Page Cache:129.5992 66355 135894640
Proc Buffers: 1.1571 593 1213340
Proc Headers: 25.0840 12843 26302464

Number of page buffers: 63856
Number of proc buffers: 15964

The Configured Memory does not equal the sum of the individual components.
It does in the sybooks example but in practice it doesn't always. This is
not critical and it is simply being noted here.

The Kernel Structures and Server structures are of mild interest. They can be
used to cross-check that the pre-allocation is what you believe it to be. The
salient line items are Number of page buffers and Number of proc buffers.

The Number of proc buffers translates directly to the number of 2K pages
available for the procedure cache.

The Number of page buffers is the number of 2K pages available for the buffer
cache.

As a side note and not trying to muddle things, these last two pieces of
information can also be obtained from the errorlog:

... Number of buffers in buffer cache: 63856.
... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and
63,856 2K pages (~126MB) for the buffer cache.

Buffer Cache

The buffer cache contains the data pages that the ASE will be either flushing
to disk or transmitting to a user connection.

If this area is too small, the ASE must flush 2K pages sooner than might be
necessary to satisfy a user connection's request.

For example, in most database applications there are small edit tables that are
used frequently by the application. These tables will populate the buffer cache
and normally will remain resident during the entire life of the ASE. This is
good because a user connection may request validation and the ASE will find the
data page(s) resident in memory. If however there is insufficient memory
configured, then these small tables will be flushed out of the buffer cache in
order to satisfy another query. The next time a validation is requested, the
tables will have to be re-read from disk in order to satisfy the request. Your
performance will degrade.

Memory access is easily an order of magnitude faster than performing a
physical I/O.

In this example we know from the previous section that we have 63,856 2K pages
(or buffers) available in the buffer cache. The question to answer is, "do we
have sufficient buffer cache configured?"

The following is the output of the dbcc memusage regarding the buffer cache:

Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387
Index Legend

        Value   Definition
        0       Table data
        1       Clustered index
        2-250   Nonclustered indexes
        255     Text pages

* To translate the DB Id, use select db_name(#) to map back to the database
  name.
* To translate the Object Id, use the respective database and run select
  object_name(#).

It's obvious that the first 10 items take up the largest portion of the buffer
cache. Sum these values and compare the result to the amount of buffer cache
configured.

Summing the 10 items nets a result of 45,273 2K data pages. Comparing that to
the number of pages configured, 63,856, we see that this ASE has sufficient
memory configured.
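The comparison above is simple arithmetic; a Python sketch (used purely as a calculator, with the figures taken from the Top 20 report above):

```python
# 2K buffer counts for the ten largest objects in the Buffer Cache report
top_ten = [9424, 7799, 7563, 7428, 2972, 2780, 2334, 2047, 1766, 1160]
configured = 63856                  # "Number of page buffers"

used = sum(top_ten)                 # 45,273 2K pages in use by the top ten
headroom = configured - used        # 18,583 pages to spare
print(used, headroom)
```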

When do I need more Buffer Cache?

I use the following rules of thumb to determine when I need more buffer
cache:

* If the sum of all the entries reported is equal to the number of pages
  configured and all entries are relatively the same size, crank it up.
* Note the natural groupings that occur in the example. If the difference
between any of the groups is greater than an order of magnitude I'd be
suspicious. But only if the sum of the larger groups is very close to the
number of pages configured.

Procedure Cache

If the procedure cache is not of sufficient size you may get sporadic 701
errors:

There is insufficient system memory to run this query.

In order to calculate the correct procedure cache one needs to apply the
following formula (found in ASE Troubleshooting Guide - Chapter 2, Procedure
Cache Sizing):

proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25

The flaw with the above formula is that if only 10% of the users are
executing the largest plan, then you'll overshoot. If you have distinct
classes of connections whose largest plans are mutually exclusive then
you need to account for that:

ttl proc cache = proc cache size * x% + proc cache size * y% ...

The max(# of concurrent users) is not the number of user connections configured
but rather the actual number of connections during the peak period.

To compute the size of the largest [query] plan, take the results from the
dbcc memusage Procedure Cache section and apply the following formula:

query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the
output of the dbcc memusage:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages

Entering the respective numbers, the query plan size for lp_cm_case_list is

query plan size = 339072 / 16
query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and
the largest value would be plugged into the procedure cache size formula:

Query Plan Sizes

        Object                    Query Plan Size
        lp_cm_case_list           21K
        lp_cm_subcase_list        21K
        csp_get_case              17K
        lp_get_last_caller_new    28K

The size of the largest [query] plan is 28K.

Entering these values into the formula:

proc cache size = max(# of concurrent users) * (size of the largest plan) * 1.25
proc cache size = 491 connections * 28K * 1.25
proc cache size = 17,185 2K pages required

Our example ASE has 15,964 2K pages configured but 17,185 2K pages are
required. This ASE can benefit by having more procedure cache configured.

This can be done one of two ways:

1. If you have some headroom in your buffer cache, then sp_configure
"procedure cache" to increase the ratio of procedure cache to buffer cache

procedure cache =
[ proposed procedure cache ] /
( [ current procedure cache ] + [ current buffer cache ] )

The new procedure cache would be 22%:

procedure cache = 17,185 / ( 15,964 + 63,856 )
procedure cache = .2152 or 22%

2. If the buffer cache cannot be shrunken, then sp_configure "memory" to
increase the total memory:

mem size =
([ proposed procedure cache ]) /
([ current procedure cache ] / [ current configured memory ])

The new memory size would be 165,399 2K pages, assuming that the
procedure cache is unchanged:

mem size = 17,185 / ( 15,964 / 153,600 )
mem size = 165,399 2K pages

Back to top


1.5.8: Why should I use stored procedures?


There are many advantages to using stored procedures (unfortunately they do not
handle the text/image types):

* Security - you can revoke access to the base tables and only allow users to
access and manipulate the data via the stored procedures.
* Performance - stored procedures are parsed and a query plan is compiled.
This information is stored in the system tables and the work only has to
be done once.
* Network - if you have users who are on a WAN (slow connection), having
stored procedures will improve throughput because fewer bytes need to flow
down the wire from the client to ASE.
* Tuning - if you have all your SQL code housed in the database, then it's
easy to tune the stored procedure without affecting the clients (unless of
course the parameters change).
* Modularity - during application development, the application designer can
concentrate on the front-end and the DB designer can concentrate on the
database.
* Network latency - a client on a LAN may seem slower if it is sending large
numbers of separate requests to a database server, bundling them into one
procedure call may improve responsiveness. Also, servers handling large
numbers of small requests can spend a surprising amount of CPU time
performing network IO.
* Minimise blocks and deadlocks - it is a lot easier to handle a deadlock if
the entire transaction is performed in one database request, also locks
will be held for a shorter time, improving concurrency and potentially
reducing the number of deadlocks. Further, it is easier to ensure that all
tables are accessed in a consistent order if code is stored centrally
rather than dispersed among a number of apps.
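As a minimal sketch of the security point above (the table, procedure and
group names are invented for illustration):

```sql
-- Users cannot touch the base table directly...
revoke all on orders from public
go

-- ...but can still add rows through a procedure
create procedure sp_add_order
    @cust_id int,
    @amount  money
as
    insert orders (cust_id, amount, placed)
    values (@cust_id, @amount, getdate())
go

grant execute on sp_add_order to order_entry_group
go
```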

Back to top


David Owen

Jan 21, 2004, 4:26:03 AM
Archive-name: databases/sybase-faq/part6

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database bigger?
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

General Troubleshooting User Database Administration ASE FAQ


1.3.1 How to clear a log suspend


A connection that is in a log suspend state is there because the transaction
that it was performing couldn't be logged. The reason it couldn't be logged is
because the database transaction log is full. Typically, the connection that
caused the log to fill is the one suspended. We'll get to that later.

In order to clear the problem you must dump the transaction log. This can be
done as follows:

dump tran db_name to data_device

At this point, any completed transactions will be flushed out to disk. If you
don't care about the recoverability of the database, you can issue the
following command:

dump tran db_name with truncate_only

If that doesn't work, you can use the with no_log option instead of the with
truncate_only option.
After successfully clearing the log the suspended connection(s) will resume.

Unfortunately, as mentioned above, there is the situation where the connection
that is suspended is the culprit that filled the log. Remember that dumping the
log only clears out completed transactions. If the connection filled the log
with one large transaction, then dumping the log isn't going to clear the
log.
System 10

What you need to do is issue an ASE kill command on the connection and then
un-suspend it:

select lct_admin("unsuspend", db_id("db_name"))

System 11

See Sybase Technical News Volume 6, Number 2

Retaining Pre-System 10 Behaviour

By setting a database's abort xact on log full option, pre-System 10 behaviour
can be retained. That is, if a connection cannot log its transaction to the log
file, it is aborted by ASE rather than suspended.
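A sketch of enabling the option (the sp_dboption name shown, "abort tran on
log full", is from memory and the database name is invented; verify against
the docs for your version):

```sql
use master
go
exec sp_dboption mydb, "abort tran on log full", true
go
use mydb
go
checkpoint   -- make the option change take effect
go
```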

Return to top


1.3.2 What's the best value for cschedspins?


It is crucial to understand that cschedspins is a tunable parameter
(recommended values being between 1-2000) and the optimum value is completely
dependent on the customer's environment. cschedspins is used by the scheduler
only when it finds that there are no runnable tasks. If there are no runnable
tasks, the scheduler has two options:

1. Let the engine go to sleep (which is done by an OS call) for a specified
interval or until an event happens. This option assumes that tasks won't
become runnable because of tasks executing on other engines. This would
happen when the tasks are waiting for I/O more than any other resource such
as locks. Which means that we could free up the CPU resource (by going to
sleep) and let the system use it to expedite completion of system tasks
including I/O.
2. Go and look for a ready task again. This option assumes that a task would
become runnable in the near term and so incurring the extra cost of an OS
context switch through the OS sleep/wakeup mechanism is unacceptable. This
scenario assumes that tasks are waiting on resources such as locks, which
could free up because of tasks executing on other engines, more than they
wait for I/O.

cschedspins controls how many times we would choose option 2 before choosing
option 1. Setting cschedspins low favours option 1 and setting it high favours
option 2. Since an I/O intensive task mix fits in with option 1, setting
cschedspins low may be more beneficial. Similarly since a CPU intensive job mix
favours option 2, setting cschedspins high may be beneficial.

The consensus is that a single CPU server should have cschedspins set to 1.
However, I strongly recommend that users carefully test values for cschedspins
and monitor the results closely. I have seen more than one site that has shot
themselves in the foot so to speak due to changing this parameter in production
without a good understanding of their environment.
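On System 11 and later there should be no need to touch cschedspins directly;
if memory serves, it is exposed through sp_configure as "runnable process
search count" (verify the parameter name in your version's docs):

```sql
-- 2000 is the multi-engine figure quoted above; test carefully before
-- changing this in production
exec sp_configure "runnable process search count", 2000
go
```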

Return to top


1.3.3 Trace Flag Definitions


To activate trace flags, add them to the RUN_* script. The following example is
using the 1611 and 260 trace flags. Note that there is no space between the
'-T' and the traceflag, despite what is written in some documentation.

Use of these traceflags is not recommended by Sybase. Please use at your
own risk.

% cd ~sybase/install
# SQL Server Information:
# name: BLAND
# master device: /usr/sybase/dbf/BLAND/master.dat
# master device size: 25600
# errorlog: /usr/sybase/install/errorlog_BLAND
# interfaces: /usr/sybase
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260

Trace Flags
| | |
| Flag | Description |
| 108 | (Documented) To allow dynamic and host variables in create view |
| | statements in ASE 12.5 and above. |
| 200 | Displays messages about the before image of the query-tree. |
| 201 | Displays messages about the after image of the query-tree. |
| 241 | Compress all query-trees whenever the SQL dataserver is started. |
| | Reduce TDS (Tabular Data Stream) overhead in stored procedures. Turn |
| | off done-in-behaviour packets. Do not use this if your application |
| | is a ct-lib based application; it'll break. |
| 260 | |
| | Why set this on? Glad you asked, typically with a db-lib application |
| | a packet is sent back to the client for each batch executed within a |
| | stored procedure. This can be taxing in a WAN/LAN environment. |
| | Changes the hierarchy and casting of datatypes to pre-11.5.1 |
| | behaviour. There was an issue is some very rare cases where a wrong |
| | result could occur, but that's been cleared up in 11.9.2 and above. |
| | |
| 291 | The trace can be used at boot time or at the session level. Keep in |
| | mind that it does not disqualify a table scan from occurring. What |
| | it will do is result in fewer datatype mismatch situations and thus |
| | the optimizer will be able to estimate the costs of SARGs and joins |
| | on columns involved in a mismatch. |
| 299 | This trace flag instructs the dataserver to not recompile a child |
| | stored procedure that inherits a temp table from a parent procedure. |
| 302 | Print information about the optimizer's index selection. |
| 303 | Display OR strategy |
| | Revert special or optimizer strategy to that strategy used in |
| 304 | pre-System 11 (this traceflag resolved several bug issues in System |
| | 11; most of these bugs are fixed in later releases of ASE). |
| 310 | Print information about the optimizer's join selection. |
| 311 | Display the expected IO to satisfy a query. Like statistics IO |
| | without actually executing. |
| 317 | Provide extra optimization information. |
| 319 | Reformatting strategies. |
| 320 | Turn off the join order heuristic. |
| 324 | Turn off the like optimization for ad-hoc queries using |
| | @local_variables. |
| | (Only valid in ASE versions prior to 11.9.2.) Instructs the server |
| | to use arithmetic averaging when calculating density instead of a |
| 326 | geometric weighted average when updating statistics. Useful for |
| | building better stats when an index has skew on the leading column. |
| | Use only for updating the stats of a table/index with known skewed |
| | data. |
| | |
| 602 | Prints out diagnostic information for deadlock prevention. |
| 603 | Prints out diagnostic information when avoiding deadlock. |
| 699 | Turn off transaction logging for the entire SQL dataserver. |
| 1204 | Send deadlock detection to the errorlog. |
| * | |
| 1205 | Stack trace on deadlock. |
| 1206 | Disable lock promotion. |
| 1603 | Use standard disk I/O (i.e. turn off asynchronous I/O). |
| * | |
| 1605 | Start secondary engines by hand |
| | Create a debug engine start file. This allows you to start up a |
| | debug engine which can access the server's shared memory for running |
| | diagnostics. I'm not sure how useful this is in a production |
| 1606 | environment as the debugger often brings down the server. I'm not |
| | sure if Sybase have ported the debug stuff to 10/11. Like most of |
| | their debug tools it started off quite strongly but was never |
| | developed. |
| | Startup only engine 0; use dbcc engine("online") to incrementally |
| 1608 | bring up additional engines until the maximum number of configured |
| | engines. |
| 1610 | Boot the SQL dataserver with TCP_NODELAY enabled. |
| * | |
| 1611 | If possible, pin shared memory -- check errorlog for success/ |
| * | failure. |
| 1613 | Set affinity of the SQL dataserver engine's onto particular CPUs -- |
| | usually pins engine 0 to processor 0, engine 1 to processor 1... |
| 1615 | SGI only: turn on recoverability to filesystem devices. |
| | Linux only: Revert to using cached filesystem I/O. By default, ASE |
| 1625 | on Linux (11.9.2 and above) opens filesystem devices using O_SYNC, |
| | unlike other Unix based releases, which means it is safe to use |
| | filesystems devices for production systems. |
| 2512 | Prevent dbcc from checking syslogs. Useful when you are constantly |
| | getting spurious allocation errors. |
| 3300 | Display each log record that is being processed during recovery. You |
| | may wish to redirect stdout because it can be a lot of information. |
| 3500 | Disable checkpointing. |
| 3502 | Track checkpointing of databases in errorlog. |
| 3601 | Stack trace when error raised. |
| 3604 | Send dbcc output to screen. |
| 3605 | Send dbcc output to errorlog. |
| 3607 | Do not recover any database, clear tempdb, or start up checkpoint |
| | process. |
| 3608 | Recover master only. Do not clear tempdb or start up checkpoint |
| | process. |
| 3609 | Recover all databases. Do not clear tempdb or start up checkpoint |
| | process. |
| 3610 | Pre-System 10 behaviour: divide by zero to result in NULL instead of |
| | error - also see Q6.2.5. |
| 3620 | Do not kill infected processes. |
| 4001 | Very verbose logging of each login attempt to the errorlog. Includes |
| | tons of information. |
| 4012 | Don't spawn chkptproc. |
| 4013 | Place a record in the errorlog for each login to the dataserver. |
| 4020 | Boot without recover. |
| | Forces all I/O requests to go through engine 0. This removes the |
| 5101 | contention between processors but could create a bottleneck if |
| | engine 0 becomes busy with non-I/O tasks. For more information... |
| | 5101/5102. |
| 5102 | Prevents engine 0 from running any non-affinitied tasks. For more |
| | information...5101/5102. |
| 7103 | Disable table lock promotion for text columns. |
| 8203 | Display statement and transaction locks on a deadlock error. |
| * | Starting with System 11 these are sp_configure'able |

Return to top


1.3.4 Trace Flags -- 5101 and 5102



Normally, each engine issues and checks for its own Disk I/O on behalf of the
tasks it runs. In completely symmetric operating systems, this behavior
provides maximum I/O throughput for ASE. Some operating systems are not
completely symmetric in their Disk I/O routines. For these environments, the
server can be booted with the 5101 trace flag. While tasks still request disk I
/O from any engine, the actual request to/from the OS is performed by engine 0.
The performance benefit comes from the reduced or eliminated contention on the
locking mechanism inside the OS kernel. To enable I/O affinity to engine 0,
start ASE with the 5101 Trace Flag.

Your errorlog will indicate the use of this option with the message:

Disk I/O affinitied to engine: 0

This trace flag only provides performance gains for servers with 3 or more
dataserver engines configured and being significantly utilized.

Use of this trace flag with fully symmetric operating systems will degrade
performance.

The 5102 trace flag prevents engine 0 from running any non-affinitied tasks.
Normally, this forces engine 0 to perform Network I/O only. Applications with
heavy result set requirements (either large results or many connections issuing
short, fast requests) may benefit. This effectively eliminates the normal
latency for engine 0 to complete running its user thread before it issues the
network I/O to the underlying network transport driver. If used in conjunction
with the 5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
For environments with heavy disk and network I/O, engine 0 could easily
saturate when only the 5101 flag is in use. This flag allows engine 0 to
concentrate on I/O by not allowing it to run user tasks. To force task affinity
off engine 0, start ASE with the 5102 Trace Flag.

Your errorlog will indicate the use of this option with the message:

I/O only enabled for engine: 0

Warning: Not supported by Sybase. Provided here for your enjoyment.

Return to top


1.3.5 What is cmaxpktsz good for?


cmaxpktsz corresponds to the parameter "maximum network packet size" which you
can see through sp_configure. I recommend only updating this value through
sp_configure. If some of your applications send or receive large amounts of
data across the network, these applications can achieve significant performance
improvement by using larger packet sizes. Two examples are large bulk copy
operations and applications reading or writing large text or image values.
Generally, you want to keep the value of default network packet size small for
users performing short queries, and allow users who send or receive large
volumes of data to request larger packet sizes by setting the maximum network
packet size configuration variable.

caddnetmem corresponds to the parameter "additional netmem" which you can see
through sp_configure. Again, I recommend only updating this value through
sp_configure. "additional netmem" sets the maximum size of additional memory
that can be used for network packets that are larger than ASE's default packet
size. The default value for additional netmem is 0, which means that no extra
space has been allocated for large packets. See the discussion below, under
maximum network packet size, for information on setting this configuration
variable. Memory allocated with additional netmem is added to the memory
allocated by memory. It does not affect other ASE memory uses.

ASE guarantees that every user connection will be able to log in at the default
packet size. If you increase maximum network packet size and additional netmem
remains set to 0, clients cannot use packet sizes that are larger than the
default size: all allocated network memory will be reserved for users at the
default size. In this situation, users who request a large packet size when
they log in receive a warning message telling them that their application will
use the default size. To determine the value for additional netmem if your
applications use larger packet sizes:

* Estimate the number of simultaneous users who will request the large packet
sizes and the sizes their applications will request, then sum the products
of users and packet size.
* Multiply this sum by three, since each connection needs three buffers.
* Add 2% for overhead, rounded up to the next multiple of 512.
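A sketch of that arithmetic, using invented figures (say 10 users each
requesting 4096-byte packets):

```shell
# users * packet size * 3 buffers, plus 2% overhead,
# rounded up to the next multiple of 512
awk 'BEGIN {
    mem = 10 * 4096 * 3                  # three buffers per connection
    mem = mem * 1.02                     # add 2% overhead
    mem = int((mem + 511) / 512) * 512   # round up to a multiple of 512
    printf "additional netmem: %d bytes\n", mem
}'
```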

Return to top


1.3.6 Buildmaster Configuration Definitions


Attention! Be very careful with these parameters. Use them only
at your own risk. Be sure to keep a copy of the original parameters and
to have a dump of all databases (including master) handy.

Since the release of 11.x (and above), there is almost no need for
buildmaster to configure parameters. In fact, buildmaster has been removed
from ASE 12.5. This section is really kept for anyone out there
running old versions of ASE. I still see the odd post from people asking
about 4.9.2, so this is for you.

Anyone else who feels a need to use buildmaster should check sp_configure
and/or SERVERNAME.cfg to see if the configuration parameter is there before
using buildmaster.



The following is a list of configuration parameters and their effect on the
ASE. Changes to these parameters can affect performance of the server. Sybase
does not recommend modifying these parameters without first discussing the
change with Sybase Tech Support. This list is provided for information only.

These are categorized into two kinds:

* Configurable through sp_configure, and
* not configurable, but changeable through 'buildmaster -y<variable>=value'.

Configurable variables:


The recovery interval specified in minutes.


A flag to inform whether system catalogs can be updated or not.


This is the number of user connections allowed in SQL
Server. This value + 3 (one for checkpoint, network
and mirror handlers) make the number of pss configured
in the server.


Number of PSS configured in the server. This value will
always be 3 more than cusrconnections. The reason is we
need PSS for checkpoint, network and mirror handlers.



The total memory configured for the Server in 2k
units. This is the memory the server will use for both
Server and Kernel Structures. For Stratus or any 4k
pagesize implementation of ASE, certain values
will change as appropriate.


This is the number of databases that can be open in SQL
Server at any given time.


Variable that defines and controls the number of logical
locks configured in the system.


This is the number of open objects that can be open at
a given point of time.


This is the percentage of cache that should be used
for procedures to be cached in.


Fill factor for indexes.


This value is in units of milli-seconds. This value determines
how much time a task is allowed to run before it yields.
This value is internally converted to ticks. See below
the explanations for cclkrate, ctimemax etc.


The default size of the database when it is created.
This value is Megabytes and the default is 2Meg.


An outdated, unused variable.


A toggle flag which will display certain recovery information
during database recoveries.


An informational variable that stores the serial number
of the product.


Flag that controls whether nested triggers allowed or not.


Variable that controls the number of device structures
that are allocated which affects the number of devices
that can be opened during server boot up. If user
defined 20 devices and this value is configured to be
10, during recovery only 10 devices will be opened and
the rest will get errors.
This variable controls maximum number of site handler
structures that will be allocated. This in turn
controls the number of site handlers that can be
active at a given instance.
This variable controls the number of remote buffers
that needs to send and receive from remote sites.
Actually this value should be set to number of
logical connections configured. (See below)
This is the number of logical connections that can
be open at any instance. This value controls
the number of resource structure allocated and
hence it will affect the overall logical connection
combined with different sites. THIS IS NOT PER SITE.


Maximum number of pre-read packets per logical connection.
If logical connections is set to 10, and cfgdatabuf is set
to 3, then the number of resources allocated will be 30.


Version number of last upgrade program ran on this server.


Sort order of ASE.


When sort orders are changed the old sort order is
saved in this variable to be used during recovery
of the database after the Server is rebooted with
the sort order change.


Character Set used by ASE


Same as cold_sortord except it stores the previous
Character Set.


page # of sort order image definition. This should
not be changed at any point. This is a server only
variable.

page # of character set image definition. This should
not be changed at any point. This is a server only
variable.

page # of previous sort order image definition. This
should not be changed at any point. This is a server
only variable.


page # of previous character set image definition. This
should not be changed at any point. This is a server
only variable.


Default language used by ASE.


Maximum number of engines that can be made online. This
number should not be more than the # of cpus available on this
system. On a single CPU system like the RS6000 this value is
always 1.

Minimum number of engines that should be online. This is 1 by
default.

A noop variable at this time.


Stack size per task configured. This doesn't include the guard
area of the stack space. The guard area can be altered through
cguardsz (see below).

This is the size of the guard area. ASE will
allocate stack space for each task by adding cfgstacksz
(configurable through sp_configure) and cguardsz (default is
2K). This has to be a multiple of PAGESIZE which will be 2k
or 4k depending on the implementation.


Size of fixed stack space allocated per task including the
guard area.

Non-configurable values :




1 millisecond = 1/1000th of a second.
1 microsecond = 1/1000000th of a second.
"Tick": the interval between two clock interrupts in real time.

"cclkrate" :

A value specified in microsecond units.
Normally on systems where a fine grained timer is not available
or if the Operating System cannot set sub-second alarms, this
value is set to 1000000 microseconds, which is 1 second. In
other words an alarm will go off every second and you will
get 1 tick per second.

On Sun4 this is set to 100000 microseconds, which results in
an interrupt going off every 1/10th of a second. You will get
10 ticks per second.

"avetimeslice" :

A value specified in millisecond units.
This is the value given in "sp_configure" as the <timeslice value>.
The milliseconds are converted to microseconds and
finally to tick values:

ticks = <avetimeslice> * 1000 / cclkrate
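For example, with the Sun4 cclkrate of 100000 and an avetimeslice of 100
milliseconds, the conversion gives a single tick:

```shell
# ticks = avetimeslice * 1000 / cclkrate
awk 'BEGIN { avetimeslice = 100; cclkrate = 100000
             printf "ticks: %d\n", avetimeslice * 1000 / cclkrate }'
```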

"timeslice" :

The unit of this variable is in ticks.
This value is derived from "avetimeslice". If "avetimeslice"
is less than 1000 milliseconds then timeslice is set to 1 tick.

"ctimemax" :

The unit of this variable is in ticks.

A task is considered to be in an infinite loop if the ticks
consumed by a particular task exceed the ctimemax value. This
is when you get timeslice -201 or -1501 errors.

"cschedspins" :

For more information see Q1.3.2.

This value alters the behavior of ASE scheduler.
The scheduler will either run a qualified task or look
for I/O completion or sleep for a while before it can
do anything useful.

The cschedspins value determines how often the scheduler
will sleep, not how long it will sleep. A low value
is suited to an I/O bound ASE, while a
high value is suited to a CPU bound ASE. Since
ASE is generally used in a mixed mode, this value
needs to be fine tuned.

Based on practical behaviour in the field, a single engine
ASE should have cschedspins set to 1 and a multi-engine
server should have it set to 2000.

Now that we've defined the units of these variables what happens when we change
cclkrate ?

Assume we have a cclkrate=100000.

A clock interrupt will occur every (100000/1000000) = 1/10th of a second.
A task that started with 1 tick and can consume up to "ctimemax=1500" ticks
can potentially run for 0.1s * (1500 + 1) ticks, which is roughly 150
seconds.

Now change the cclkrate to 75000.

A clock interrupt will occur every (75000/1000000) = 0.075 seconds. A task
that started with 1 tick and can consume up to ctimemax=1500 ticks can
potentially run for 0.075s * (1500 + 1) ticks, which is roughly 112
seconds.

Decreasing the cclkrate value will decrease the time allowed to each task. If
the task cannot voluntarily yield within that time, the scheduler will kill it.

UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The default
ctimemax value should be set to 1500. This is an empirical value and this can
be changed under special circumstances and strictly under the guidance of DSE.



Name of the master device is saved here. This is 64
bytes in length.


This is a derived value from cusrconnections + 3.
See cusrconnections above.


This value defines the number of transactions that
can be done by a task at a given instance.
Changing this value to be more than 32 will have no
effect on the server.
This value defines the number of open tables per
task. This will be typically for a query. This
will be the number of tables specified in a query
including subqueries.

Sybase advises not to change this value. There
will be a significant change in the size of the per user
resource in ASE.


This is a derived variable based on the total
memory configured and subtracting different resource
sizes for Databases, Objects, Locks and other
Kernel memories.


This is the same as cdesnum. Other values will have no effect on it.


This is a derived value. Based on cpcacheprcnt variable.


This is the same as clocknum. Other values will have no effect on it.


This is a variable that defines the number of cache protectors per
task. This is used internally by ASE.

Sybase advises not to modify this value, as the default of 15 will
be more than sufficient.


This is a derived value based on cusrconnections + <extra> for
Sybase internal tasks that are both visible and non-visible.


This is an internal variable that will keep track of ASE

Modifying this value will not have any effect.


Number of mail box structures that need to be allocated.
More used in VMS environment than UNIX environment.


Used in tandem with cnmbox.


Maximum number of messages that can be passed between mailboxes.


Number of disk I/O requests (async and direct) that can be
processed at a given instance. This is a global value for all
the engines, not a per engine value.

This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.


Maximum number of I/O requests that can be processed at any given
time.
Normally cnblkio,cnblkmax and cnmaxaio_server should be the same.


Maximum number of I/O requests that can be processed by one engine.
Since engines are Operating System Process, if there is any limit
imposed by the Operating System on a per process basis then
this value should be set. Otherwise it is a noop.


This is the total number of I/O requests ASE can do.
This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.


not used.


Similar to the disk I/O requests, this is for network I/O requests.
This includes disk/tape dumps also. This value is for
the whole ASE, including the other engines.


Maximum number of network I/O requests, including disk/tape dumps.


Maximum number of alarms including the alarms used by
the system. This is typically used when users do "waitfor delay"


Mirror device name for the master device.


Status of mirror devices for the master device like serial/dynamic
mirroring etc.


This value determines the ageing of an index buffer before it
is removed from the cache.


This value determines the ageing of an OAM buffer before it
is removed from the cache.


This value determines the number of extents that will be
allocated while doing BCP.


This value determines when to flush buffers in the cache
that are modified.

Return to top


1.3.7: What is CIS and how can I use it?


CIS is the new name for Omni ASE. The biggest difference is that CIS is
included with Adaptive Server Enterprise as standard. Actually, this is not
completely accurate; the ability to connect to other ASEs and SQL Servers,
including Microsoft's, is included as standard. If you need to connect to DB2
or Oracle you have to obtain an additional licence.

So, what is it?

CIS is a means of connecting two servers together so that seamless cross-server
joins can be executed. It is not just restricted to selects, pretty much any
operation that can be performed on a local table can also be performed on a
remote table. This includes dropping it, so be careful!

What servers can I connect to?

* Sybase ASE
* Microsoft SQL Server
* Oracle

What are the catches?

Well, nothing truly comes for free. CIS is not a means of providing true load
sharing, although you will find nothing explicitly in the documentation to tell
you this. Obviously there is a performance hit which seems to affect cursors
worst of all. CIS itself is implemented using cursors and this may be part of
the explanation.

OK, so how do I use it?

Easy! Add the remote server using sp_addserver. Make sure that you define it
as type sql_server or ASEnterprise. Create an "existing" table using the
definition of the remote table. Update statistics on this new "existing"
table. Then simply use it in joins exactly as if it were a local table.
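A rough sketch of those steps (server, database and table names are invented;
the sp_addobjectdef mapping step is from the Omni-style syntax I remember, so
check the CIS documentation for your release):

```sql
-- Register the remote server (class sql_server or ASEnterprise)
exec sp_addserver REMSRV, ASEnterprise, "remhost:5000"
go

-- Map a local name onto the remote table, then describe it locally
exec sp_addobjectdef remcust, "REMSRV.salesdb.dbo.customers", "table"
go
create existing table remcust (
    cust_id   int         not null,
    cust_name varchar(40) not null
)
go
update statistics remcust
go

-- Join it exactly as if it were local
select o.order_id, c.cust_name
  from orders o, remcust c
 where o.cust_id = c.cust_id
go
```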

Return to top


1.3.8: If the master device is full, how do I make the master database bigger?


It is not possible to extend the master database across another device, so the
following from Eric McGrane (recently of Sybase Product Support Engineering)
should help.

* dump the current master database
* Pre-12.5 users use buildmaster to create a new master device with a larger
size. ASE 12.5 users use dataserver to build the new, larger, master device.
* start the server in single user mode using the new master device
* login to the server and execute the following tsql:

select * from sysdevices

* take note of the high value
* load the dump of the master you had just taken
* restart the server (as it will be shut down when master is done loading),
in single user mode so that you can update system tables
* login to the server and update sysdevices, setting high for master to the
value that you noted previously
* shut the server down and start it back up, but this time not in single
user mode
The end result of the above is that you will now have a larger master device
and you can alter your master database to be a larger size. For details about
starting the server in single user mode and how to use buildmaster (if you need
the details) please refer to the documentation.

Return to top


1.3.9: How do I run multiple versions of Sybase on the same server?


The answer to this relies somewhat on the platform that you are using.


ASE Versions Before 12.0

This applies to Unix and variants, Linux included. Install the various releases
of software into logical places within your filesystem. I like to store all
application software below a single directory for ease of maintenance, choose
something like /sw. I know that some are keen on /opt and others /usr/local. It
is all down to preference and server usage. If you have both Oracle and Sybase
on the same server you might want /sw/sybase or /opt/sybase. Be a little
careful here if your platform is Linux or FreeBSD. The standard installation
directory for Sybase on those platforms is /opt/sybase. Finally, have a
directory for the release, say ASE11_9_2 or simply 11.9.2 if you only ever have
Sybase ASE running on this server. A little imagination is called for!

So, now you have a directory such as /sw/sybase/ASE/11.9.2 (my preferred choice
:-), and some software installed under the directories, what now? In the most
minimal form, that is all you need. None of the environment variables are
essential. You could quite successfully run

/sw/sybase/ASE/11.9.2/bin/isql -Usa -SMYSERV -I/sw/sybase/ASE/11.9.2/interfaces

and get to the server, but that is a lot of typing. By setting the SYBASE
environment variable to /sw/sybase/ASE/11.9.2 you never need to tell isql or
other apps where to find the interfaces. Then, you can set the path with a cool

PATH=$SYBASE/bin:$PATH

to pick up the correct set of Sybase binaries. That reduces the previous mass
of typing to

isql -Usa -SMYSERV

which is much more manageable.

You can create yourself a couple of shell scripts to do the changes for you. So
if the script a11.9 contained:

SYBASE=/sw/sybase/ASE/11.9.2
PATH=$SYBASE/bin:$PATH
# Remember to export the variables!

and a12.0 contained:

SYBASE=/sw/sybase/ASE/12.0
PATH=$SYBASE/bin:$PATH
# Remember to export the variables!

you would toggle between being connected to an 11.9.2 server and a 12.0
server, depending upon which one you executed last. The scripts are not at all
sophisticated, you could quite easily have one script and pass a version string
into it. You will notice that the PATH variable gets longer each time the
script is executed. You could add greps to see if there was already a Sybase
instance on the path. Have I mentioned imagination?

ASE 12.0 and Beyond

Sybase dramatically changed the structure of the installation directory tree
with ASE 12. You still have a SYBASE environment variable pointing to the
root, but now the various packages fit below that directory. So, if we take
/sw/sybase as the root directory, we have directories such as ASE-12_5 and
OCS-12_5 below it (the names are from a 12.5 installation, but all versions
follow the same format).

Below ASE-12_5 is most of the stuff that we have come to expect under $SYBASE:
the install, bin and scripts directories. This is also where the SERVER.cfg
file has moved to. (Note that the interfaces file is still in $SYBASE.) The bin
directory on this side includes the dataserver, diagserver and srvbuild
binaries.
The OCS-12_5 directory holds the Open Client software. Keeping it separate
means that Sybase can update the client software without unduly affecting the
server. isql, bcp and other clients are to be found here.

It does take a little getting used to if you have been using the pre-12 style
for a number of years. However, in its defence, it is much more logical, even
if it does about triple the length of your PATH variable!

That is another good part of the new installation. Sybase actually provides you
with a shell script to do all of this. There is a file in /sw/sybase (there is
an equivalent C shell version in the same place) that sets everything you need!

Interfaces File

The only real addition to all of the above is an easier way to manage the
interfaces file. As mentioned before, ASE based apps look for the interfaces
file in $SYBASE/interfaces by default. Unix is nice in that it allows you to
have symbolic links that make it appear as if a file is somewhere that it
isn't. Place the real interfaces file somewhere independent of the software
trees. /sw/sybase/ASE/interfaces might be a sound logical choice. Now, cd to
$SYBASE and issue

ln -s /sw/sybase/ASE/interfaces

and the interfaces will appear to exist in the $SYBASE directory, but will in
fact remain in its own home.

Note: make sure that the interfaces file is copied to its own home before removing
it from $SYBASE.

Now you can put symbolic links in each and every software installation and only
have to worry about maintaining the server list, on that server, in one place.
Having the interfaces file common to many physical servers is trickier, but not
impossible. Personally I would choose to put it in a central CVS repository and
use that to keep each server reasonably up-to-date.
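The symbolic link trick can be tried out safely before touching a live installation. The sketch below mirrors the steps using throwaway directories under mktemp (all paths and the interfaces contents are invented stand-ins for /sw/sybase/ASE and a real server entry):

```shell
#!/bin/sh
set -e

# Throwaway stand-ins for /sw/sybase/ASE and a $SYBASE release tree.
root=$(mktemp -d)
mkdir -p "$root/ASE/11.9.2"

# Put the "real" interfaces file in its own home...
printf 'MYSERV\n\tquery tcp ether myhost 5000\n' > "$root/ASE/interfaces"

# ...then link to it from the release tree, as you would from $SYBASE.
cd "$root/ASE/11.9.2"
ln -s ../interfaces

# The file now appears to live under $SYBASE but has a single real home.
grep MYSERV interfaces
```

With one-argument ln -s the link takes the basename of its target, so each $SYBASE tree gets an "interfaces" entry pointing back at the shared copy.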


NT and Windows 2000

Firstly, I have tried the following on W2K and it all works OK. I have read a
number of reports of people having difficulty getting clean installs under NT,
11.5 and 12.0 mainly. I cannot remember having a problem with either of those
myself, but I only ever installed it to test that stuff I write runs on all
platforms. I have no intention of upgrading to XP until MS pays me to do it. It
looks like a cheap plastic version of an operating system and I pity anyone
that is forced to use it.

NT is tougher than UNIX to run multiple instances on, mainly due to the fact
that it wants to do stuff for you in the background, namely configure
environment variables. The following worked for me with the following versions
of Sybase ASE all installed and running on a single server: 11.5.1, 11.9.2,
12.5. I don't have a version of ASE 12.0 for NT. If I can persuade Sybase to
send it to me, I might be able to get that running too. Notably, each and
every one of the databases runs as a service!!!

1. Start by installing each software release into its own area. Make sure that
it is a local disk. (See Q2.2.3.) I chose to install ASE 12.5 into C:\
Sybase12_5 and ASE 11.9.2 into C:\Sybase11_9_2 etc. When it asks you about
configuring the server, select "no" or "cancel".
2. Add a user for each installation that you are going to run. Again, I added
a user sybase12_5 for ASE 12.5 and sybase11_9_2 for ASE 11.9.2.
3. As a system account, edit the environment variables (On W2K this is
Settings->Control Panel->System->Advanced->Environment Variables...) and
remove any reference to Sybase from the system path. Make sure that you
store away what has been set. A text file on your C drive is a good idea at
this stage.
4. Similarly, remove references to Sybase from the Lib, Include and CLASSPATH
variables, storing the strings away.
5. Remove the SYBASE, DSEDIT and DSQUERY variable.
6. As I said before, I do not own 12.0, so I cannot tell you what to do about
its variables. I can only assume that you need to cut them out too. If you are
installing pre-12 with only one of 12 or 12.5, then it is not necessary.
7. Login as each new Sybase user in turn and add to each of these a set of
local variables corresponding to path, Include, Lib and set them to be the
appropriate parts from the strings you removed from the system versions
above. So, if you installed ASE 12.5 in the method described, you will have
a whole series of variables with settings containing "C:\Sybase_12_5", add
all of these to local variables belonging to the user sybase12_5. Repeat
for each instance of ASE installed. This is a tedious process and I don't
know a way of speeding it up. It may be possible to edit the registry, but
I was not happy doing that.
8. If you have made each of the Sybase users administrators, then you can
configure the software from that account, and install a new ASE server.
Remember that each one needs its own port. 11.5.1 and 11.9.2 did not give
me an option to change the port during the install, so I had to do that
afterwards by editing the SQL.INI for each server in its own installation
directory.
9. If you are not able to make each user an administrator, you will need to
work with an admin to configure the software. (ASE requires administrative
rights in order to be able to add the service entries.) You will need to
log in as this admin account, set the path to the appropriate value for
each installation, install the software and then set the path to the new
values, install the next ASE etc. On NT for sure you will have to log out
and log in after changing the path variable. 2000 may be less brain dead.
Just be thankful you are not having to reboot!
10. Log back in as your tame administrator account and go into the control
panel. You need to start the "Services" applet. This is either there if you
are running NT or you have to go into "Administrative Tools" for 2000.
Scroll down and select the first of the services, which should be of the form

"Sybase SQLServer _MYSERVER".

Right click and select "Properties" (I think this is how it was for NT, but
you want that services properties, however you get there.) In 2000 there is
a "Log On" tab. NT has a button (I think) that serves the same purpose.
Whether tab or button, click on it. You should have a panel that starts, at
the top, with "Log on as" and a pair of radio options. The top one will
probably be selected, "Local System account". Choose the other and enter
the details for the sybase account associated with this server. So if the
server is ASE 12.5 enter "sybase12_5" for "This account" and enter the
password associated with this account in the next two boxes. Select enough
"OK"s to take you out of the service properties editor.
11. None of the installations made a good job of the services part. All of them
added services for all of the standard servers (data, backup, monitor and
XP), even though I had not configured any but XP server. (The NT
installation is of a different form to the UNIX/Linux versions.) The 12.5
XP configuration was OK, but the pre-12 ones were not. You will have to go
in and manually set the user to connect as (as described earlier). If you
do not do this, the services will not start properly.
12. You should then be able to start any or all of the services by pressing the
"play" button.
13. Finally, you need to re-edit the local copies of the path, Include and Lib
variables for your tame admin account if you use that account to connect to
the servers.
It worked for me, as I said. I was able to run all 3 services simultaneously
and connect from the local and external machines. There is no trick as neat as
the symbolic link on Unix. Links under NT work differently.

Return to top


1.3.10: How do I capture a process's SQL?


This is a bit of a wide question, and there are many answers to it. Primarily,
it depends on why you are trying to capture it. If you are trying to debug a
troublesome stored procedure that is behaving differently in production to how
it did in testing, then you might look at the DBCC method. Alternatively, if
you wanted to do some longer term profiling, then auditing or one of the third
party tools might be the way forward. If you know of methods that are not
included here, please let me know.


If you want to look at the SQL a particular process is running at the moment,
one of the following should work. Not sure which versions of ASE these work
with. Remember to issue dbcc traceon(3604) before running any of the dbcc's so
that you can see the output at your terminal.

* dbcc sqltext(spid)
* dbcc pss(0, spid, 0)

The first of the commands issues the SQL of the spid only, a bit like this:

[27] BISCAY.master.1> dbcc sqltext(9)
[27] BISCAY.master.2> go
SQL Text: select spid, status, suser_name(suid), hostname,
db_name(dbid), cmd, cpu, physical_io, memusage,
convert(char(5),blocked) from master..sysprocesses
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
[28] BISCAY.master.1>

The second issues an awful lot of other stuff before printing the text at the
bottom. Mercifully, this means that you don't have to scroll up to search for
the SQL text, which is in much the same format as with dbcc sqltext.

There are a number of third party tools that will execute these commands from a
list of processes. One of the problems is that you do have to be 'sa' or have
'sa_role' in order to run them.

Certainly the first, and possibly both, have one major drawback, and that is
that they are limited to displaying about 400 bytes worth of text, which can be
a bit annoying. However, if what you are trying to do is catch a piece of rogue
SQL that is causing a table scan or some other dastardly trick, a unique
comment in the early part of the query will lead to its easy identification.
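For example, tagging a report query with a comment (the tag shown is arbitrary):

```sql
select /* NIGHTLY-RPT-42 */ cust_id, sum(amount)
from   orders
group  by cust_id
```

The tag sits in the first few hundred bytes that dbcc sqltext displays, so the culprit query is easy to identify.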

Monitor Server

Since ASE 11.5, monitor server has had the capability to capture a process's
SQL. See Q1.6.2 for how to configure a Monitor Server Client. When you are
done, you can see the SQL text from a process using the "Process Current
SQL Statement" monitor.


The second way of wanting to do this is for a number of processes for a period
of time. There are several methods of doing this. Probably the most popular is
to use auditing, and it is almost certainly the most popular because it
requires no additional software purchases.

Auditing

Auditing is a very powerful tool that can collect information on just about
everything that happens on the server. It can be configured to capture
'cmdtext' for any or all users on a system. The data is written to the
sysaudits tables in the sybsecurity database for later perusal. The SQL
captured is not limited to a number of bytes, like the previous examples, but
if it is more than 255 bytes long, then it will span several audit records,
which must be put back together to see the whole picture. To be honest, I am
not sure what happens now that varchars can be greater than 255 bytes in
length. Personal experience with auditing leads me to think that the load on
the server is up to about 3%, depending on the number of engines you have (the
more engines, the more of a load auditing is) and, obviously, the number of
processes you wish to monitor. I calculated 3% based on auditing all 400 users,
each of which had 2 connections to the server, on a server with 7 engines.
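On ASE 11.5 and later, enabling cmdtext capture for a single login looks something like this (the login name is invented, and the exact sp_audit arguments vary between versions, so check the manuals for yours):

```sql
-- Auditing must be installed (the sybsecurity database) and switched on.
exec sp_configure "auditing", 1
go
-- Capture the text of every command the login ajones sends.
exec sp_audit "cmdtext", "ajones", "all", "on"
go
```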


Ribo

Another option for capturing the SQL text is to use the free Ribo utility that
is provided as part of ASE these days. This is a small server written in
Java as an example of what can be done using jConnect. This utility is nice in
that it does not place any load on the ASE server. However, it probably has an
effect on the client that is using it. This utility's other drawback is that
each client that you wish to monitor via Ribo must be directly configured to
use it. It is not possible to just magically turn it on mid-session.

The way it works is to act as an intermediary between the ASE server and the
client wishing to connect. All SQL is passed through and executed exactly as
if the client were directly connected, and the results passed back. What the
Ribo server does is enable you to save the inbound SQL to a file.

3rd Party Tools

Again, there are a number of third party tools that do this job as well,
OpenSwitch being one of them. There are also a number of third party tools that
do a better job than this. They do not have any impact on the client or the
server. They work by sniffing the network for relevant packets and then put
them back together. In actuality, they do a lot more than just generate the
SQL, but they are capable of that.

Return to top


General Troubleshooting User Database Administration ASE FAQ

David Owen

Jan 21, 2004, 4:26:06 AM
Archive-name: databases/sybase-faq/part12

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

SQL Advanced bcp ASE FAQ


6.1.1: Alternative to row at a time processing


Someone asked how they could speed up their processing. They were batch
updating/inserting gobs of information. Their algorithm was something like
this:
... In another case I do:

If exists (select record) then
update record
insert record

I'm not sure which way is faster or if it makes a difference. I am doing
this for as many as 4000 records at a time (calling a stored procedure 4000
times!). I am interested in knowing any way to improve this. The parameter
translation alone on the procedure calls takes 40 seconds for 4000 records.
I am using exec in DB-Lib.

Would RPC or CT-Lib be better/faster?

A netter responded stating that it was faster to ditch their algorithm and to
apply a set based strategy:

The way to take your approach is to convert the row at a time processing
(which is more traditional type of thinking) into a batch at a time (which
is more relational type of thinking). Now I'm not trying to insult you to
say that you suck or anything like that, we just need to dial you in to
think in relational terms.

The idea is to do batches (or bundles) of rows rather than processing a
single one at a time.

So let's take your example (since you didn't give exact values [probably
out of kindness to save my eyeballs] I'll use your generic example to
extend what I'm talking about):


Old way:

if exists (select record) then
update record
insert record

New way:
1. Load all your rows into a table named new_stuff in a separate work
database (call it work_db) and load it using bcp -- no third GL needed.
1. truncate new_stuff and drop all indexes
2. sort your data using UNIX sort and sort it by the clustered columns
3. load it using bcp
4. create the clustered index using with sorted_data, plus any ancillary
non-clustered indexes.
2. Assuming that your target table is called old_stuff
3. Do the update in a single batch:
begin tran

/* delete any rows in old_stuff which would normally
** have been updated... we'll insert 'em instead!
** Essentially, treat the update as a delete/insert.
*/
delete old_stuff
from old_stuff, new_stuff
where old_stuff.key = new_stuff.key

/* insert entire new table: this adds any rows
** that would have been updated before and
** inserts the new rows
*/
insert old_stuff
select * from new_stuff

commit tran

You can do all this without writing 3-GL, using bcp and a shell script.

A word of caution:

Since these inserts/updates are batched orientated you may blow your
log if you attempt to do too many at a time. In order to avoid this use
the set rowcount directive to create bite-size chunks.
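A sketch of that chunking, using the delete from the example above (the batch size of 5000 is invented; pick one your log can absorb):

```sql
-- Process the delete in bite-size chunks so the log can be truncated
-- between batches.
set rowcount 5000
declare @done int
select @done = 0
while @done = 0
begin
    begin tran
    delete old_stuff
    from old_stuff, new_stuff
    where old_stuff.key = new_stuff.key
    if @@rowcount = 0
        select @done = 1
    commit tran
end
set rowcount 0
```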

Back to top


6.1.2: When should I execute an sp_recompile?


An sp_recompile should be issued any time a new index is added or update
statistics is run. Dropping an index will cause an automatic recompile of all
objects that are dependent on the table.

The sp_recompile command simply increments the schemacnt counter for the given
table. All dependent objects' counters are checked against this counter and if
they differ the SQL Server recompiles the object.
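In practice that usually means a pair of commands after changing the statistics (the table name is invented):

```sql
-- Refresh the distribution statistics, then flag dependent plans as stale.
update statistics old_stuff
go
exec sp_recompile old_stuff
go
```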

Back to top


6.1.3: What are the different types of (All Page) locks?


First off, just to get it out of the way, Sybase does now support row level
locking! (See Q6.1.11 for a description of the new features.) OK, that said and
done, if you think you need row level locking, you probably aren't thinking set
based -- see Q6.1.1 for set processing.

The SQL Server uses locking in order to ensure the sanity of your queries.
Without locking there is no way to ensure the integrity of your operation.
Imagine a transaction that debited one account and credited another. If the
transaction didn't lock out readers/writers then someone can potentially see
erroneous data.

Essentially, the SQL Server attempts to use the least intrusive lock possible,
page lock, to satisfy a request. If it reaches around 200 page locks, then it
escalates the lock to a table lock and releases all page locks thus performing
the task more efficiently.

There are three types of locks:

* page locks
* table locks
* demand locks

Page Locks

There are three types of page locks:

* shared
* exclusive
* update


Shared Locks

These locks are requested and used by readers of information. More than one
connection can hold a shared lock on a data page.

This allows for multiple readers.


Exclusive Locks

The SQL Server uses exclusive locks when data is to be modified. Only one
connection may have an exclusive lock on a given data page. If a table is large
enough and the data is spread sufficiently, more than one connection may update
different data pages of a given table simultaneously.


Update Locks

An update lock is placed during a delete or an update while the SQL Server is
hunting for the pages to be altered. While an update lock is in place, there
can be shared locks, thus allowing for higher throughput.

The update lock(s) are promoted to exclusive locks once the SQL Server is ready
to perform the delete/update.

Table Locks

There are three types of table locks:

* intent
* shared
* exclusive


Intent Locks

Intent locks indicate the intention to acquire a shared or exclusive lock on a
data page. Intent locks are used to prevent other transactions from acquiring
shared or exclusive locks on the given page.


Shared Locks

This is similar to a page level shared lock but it affects the entire table.
This lock is typically applied during the creation of a non-clustered index.


Exclusive Locks

This is similar to a page level exclusive lock but it affects the entire table.
If an update or delete affects the entire table, an exclusive table lock is
generated. Also, during the creation of a clustered index an exclusive table
lock is generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets
a demand lock to indicate that a transaction is next to lock a table or a page.

This avoids indefinite postponement if there was a flurry of readers when a
writer wished to make a change.

Back to top


6.1.4: What's the purpose of using holdlock?


All select/readtext statements acquire shared locks (see Q6.1.3) to retrieve
their information. After the information is retrieved, the shared lock(s) is/
are released.

The holdlock option is used within transactions so that after the select/
readtext statement the locks are held until the end of the transaction:

* commit transaction
* rollback transaction

If the holdlock is not used within a transaction, the shared locks are
released as soon as the statement completes, even though the transaction is
still open.


Assume we have the following two transactions and that each where-clause
qualifies a single row:

tx #1

begin transaction
/* acquire a shared lock and hold it until we commit */
1: select col_1 from table_a holdlock where id=1
2: update table_b set col_3 = 'fiz' where id=12
commit transaction

tx #2

begin transaction
1: update table_a set col_2 = 'a' where id=1
2: update table_c set col_3 = 'teo' where id=45
commit transaction

If tx#1, line 1 executes prior to tx#2, line 1, tx#2 waits to acquire its
exclusive lock until tx#1 releases the shared level lock on the object. This
will not be done until the commit transaction, thus slowing user throughput.

On the other hand, if tx#1 had not used the holdlock attribute, tx#2 would not
have had to wait until tx#1 committed its transaction. This is because shared
level locks are released immediately (even within transactions) when the
holdlock attribute is not used.

Note that the holdlock attribute does not stop another transaction from
acquiring a shared level lock on the object (i.e. another reader). It only
stops an exclusive level lock (i.e. a writer) from being acquired.

Back to top


6.1.6: How do I find the oldest open transaction?

select h.spid,, p.cmd,, h.starttime,
p.hostname, p.hostprocess, p.program_name
from master..syslogshold h,
master..sysprocesses p,
master..sysusers u
where h.spid = p.spid
and p.suid = u.suid
and h.spid != 0 /* not replication truncation point */

Back to top


6.1.7: How do I check if log truncation is blocked?


System 11 and beyond:

select h.spid, convert(varchar(20),, h.starttime
from master..syslogshold h,
sysindexes i
where h.dbid = db_id()
and h.spid != 0
and = 8 /* syslogs */
and in (i.first, i.first+1) /* first page of log = page of oldest xact */

Back to top


6.1.8: The timestamp datatype


The timestamp datatype is a user-defined datatype supplied by Sybase, defined as:

varbinary(8) NULL

It has a special use when used to define a table column. A table may have at
most one column of type timestamp, and whenever a row containing a timestamp
column is inserted or updated the value in the timestamp column is
automatically updated. This much is covered in the documentation.

What isn't covered is what the values placed in timestamp columns actually
represent. It is a common misconception that timestamp values bear some
relation to calendar date and/or clock time. They don't - the datatype is
badly-named. SQL Server keeps a counter that is incremented for every write
operation - you can see its current value via the global variable @@DBTS
(though don't try and use this value to predict what will get inserted into a
timestamp column as every connection shares the same counter.)

The value is maintained between server startups and increases monotonically
over time (though again you cannot rely on this behaviour). Eventually the
value will wrap, potentially causing huge problems, though you will be warned
before it does - see Sybase Technical News Volume 5, Number 1 (see Q10.3.1).
You cannot convert this value to a datetime value - it is simply an 8-byte
binary counter.
Note that the global timestamp value is used for recovery purposes in the
event of an RDBMS crash. As transactions are committed to the log each
transaction gets a unique timestamp value. The checkpoint process places a
marker in the log with its unique timestamp value. If the RDBMS crashes,
recovery is the process of looking for transactions that need to be rolled
forward and/or backward from the checkpoint event. If a transaction spans
the checkpoint event and never completed, it too needs to be rolled back.
Essentially, this describes the write-ahead log protocol described by C.J.
Date in An Introduction to Database Systems.

So what is it for? It was created in order to support the browse-mode functions
of DB-Library (and for recovery as mentioned above). This enables an
application to easily support optimistic locking (See Q1.5.4) by guaranteeing a
watch column in a row will change value if any other column in that row is
updated. The browse functions checked that the timestamp value was still the
same as when the column was read before attempting an update. This behaviour is
easy to replicate without necessarily using the actual client browse-mode
functions - just read the timestamp value along with other data retrieved to
the client, and compare the stored value with the current value prior to an
update.
Back to top


6.1.9: Stored Procedure Recompilation and Reresolution


When a stored procedure is created, the text is placed in syscomments and a
parse tree is placed in sysprocedures. At this stage there is no compiled query
plan.

A compiled query plan for the procedure only ever exists in memory (that is, in
the procedure cache) and is created under the following conditions:

1. A procedure is executed for the first time.
2. A procedure is executed by a second or subsequent user when the first plan
in cache is still in use.
3. The procedure cache is flushed by server restart or cache LRU flush
4. The procedure is executed or created using the with recompile option.

If the objects the procedure refers to change in some way - indexes dropped,
table definition changed, etc - the procedure will be reresolved - which
updates sysprocedures with a modified tree. Before 10.x the tree grows and in
extreme cases the procedure can become too big to execute. This problem
disappears in Sybase System 11. This reresolution will always occur if the
stored procedure uses temporary tables (tables that start with "#").

There is apparently no way of telling if a procedure has been reresolved.

Traceflag 299 offers some relief; see Q1.3.3 for more information regarding
this flag.

The Official Explanation -- Reresolution and Recompilation Explained

When stored procedures are created, an entry is made in sysprocedures that
contains the query tree for that procedure. This query tree is the resolution
of the procedure and the applicable objects referenced by it. The syscomments
table will contain the actual procedure text. No query plan is kept on disk.
Upon first execution, the query tree is used to create (compile) a query plan
(execution plan) which is stored in the procedure cache, a server memory
structure. Additional query plans will be created in cache upon subsequent
executions of the procedure whenever all existing cached plans are in use. If a
cached plan is available, it will be used.

Recompilation is the process of using the existing query tree from
sysprocedures to create (compile) a new plan in cache. Recompilation can be
triggered by any one of the following:

* First execution of a stored procedure,
* Subsequent executions of the procedure when all existing cached query plans
are in use,
* If the procedure is created with the recompile option, CREATE PROCEDURE
sproc WITH RECOMPILE,
* If execution is performed with the recompile option, EXECUTE sproc WITH
RECOMPILE.
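The two recompile options in the last two bullets look like this (procedure and table names invented):

```sql
-- A fresh plan is compiled on every execution of this procedure.
create procedure list_orders with recompile
as
    select order_id, cust_id from orders
go

-- Or force a fresh plan for a single execution.
exec list_orders with recompile
go
```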

Re-resolution is the process of updating the query tree in sysprocedures AND
recompiling the query plan in cache. Re-resolution only updates the query tree
by adding the new tree onto the existing sysprocedures entry. This process
causes the procedure to grow in size which will eventually cause an execution
error (Msg 703 - Memory request failed because more than 64 pages are required
to run the query in its present form. The query should be broken up into
shorter queries if possible). Execution of a procedure that has been flagged
for re-resolution will cause the re-resolution to occur. To reduce the size of
a procedure, it must be dropped which will remove the entries from
sysprocedures and syscomments. Then recreate the procedure.

Re-resolution can be triggered by various activities most of which are
controlled by SQL Server, not the procedure owner. One option is available for
the procedure owner to force re-resolution. The system procedure, sp_recompile,
updates the schema count in sysobjects for the table referenced. A DBA usually
will execute this procedure after creating new distribution pages by use of
update statistics. The next execution of procedures that reference the table
flagged by sp_recompile will have a new query tree and query plan created.
Automatic re-resolution is done by SQL Server in the following scenarios:

* Following a LOAD DATABASE on the database containing the procedure,
* After a table used by the procedure is dropped and recreated,
* Following a LOAD DATABASE of a database where a referenced table resides,
* After a database containing a referenced table is dropped and recreated,
* Whenever a rule or default is bound or unbound to a referenced table.

Forcing automatic compression of procedures in System 10 is done with trace
flag 241. System 11 should be doing automatic compression, though this is not
confirmed.

When are stored procedures compiled?

Stored procedures are in a database as rows in sysprocedures, in the form of
parse trees. They are later compiled into execution plans.

A stored procedure is compiled:

1. with the first EXECute, when the parse tree is read into cache
2. with every EXECute, if CREATE PROCEDURE included WITH RECOMPILE
3. with each EXECute specifying WITH RECOMPILE
4. if the plans in cache for the procedure are all in use by other processes
5. after a LOAD DATABASE, when all procedures in the database are recompiled
6. if a table referenced by the procedure can not be opened (using object id),
when recompilation is done using the table's name
7. after a schema change in any referenced table, including:
1. CREATE INDEX or DROP INDEX to add/delete an index
2. ALTER TABLE to add a new column
3. sp_bindefault or sp_unbindefault to add/delete a default
4. sp_bindrule or sp_unbindrule to add/delete a rule
8. after EXECute sp_recompile on a referenced table, which increments
sysobjects.schema and thus forces re-compilation
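The two owner-controlled triggers in the list above, sp_recompile and WITH
RECOMPILE, can be exercised like this (sp_mylookup and the pubs2 titles table
are illustrative names, not part of the original text):

```sql
-- Build the recompilation in at creation time, so every execution
-- gets its own plan:
create procedure sp_mylookup @title_id varchar(6)
with recompile
as
select * from titles where title_id = @title_id

-- Or ask for a fresh plan on just one execution of an existing procedure:
exec sp_mylookup @title_id = "BU1032" with recompile

-- Or force a new plan for every procedure referencing the table the
-- next time each one is executed (bumps sysobjects.schema):
sp_recompile titles
```

Note that sp_recompile does not recompile anything itself; it merely flags the
table so that each dependent procedure is recompiled on its next execution.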

What causes re-resolution of a stored procedure?

When a stored procedure references an object that is modified after the
creation of the stored procedure, the stored procedure must be re-resolved.
Re-resolution is the process of verifying the location of referenced objects,
including the object id number. Re-resolution will occur under the following
circumstances:

1. One of the tables used by the stored procedure is dropped and re-created.
2. A rule or default is bound to one of the tables (or unbound).
3. The user runs sp_recompile on one of the tables.
4. The database the stored procedure belongs to is re-loaded.
5. The database that one of the stored procedure's tables is located in is
re-loaded.
6. The database that one of the stored procedure's tables is located in is
dropped and re-created.

What will cause the size of a stored procedure to grow?

Any of the following will cause a stored procedure to grow when it is
re-resolved:

1. One of the tables used in the procedure is dropped and re-created.
2. A new rule or default is bound to one of the tables or the user runs
sp_recompile on one of the tables.
3. The database containing the stored procedure is re-loaded.

Other things that cause a stored procedure to be re-compiled will not cause it
to grow. For example, dropping an index on one of the tables used in the
procedure causes recompilation but no growth.

The difference is between simple recompilation and re-resolution. Re-resolution
happens when one of the tables changes in such a way that the query trees
stored in sysprocedures may be invalid. The datatypes, column offsets, object
ids or other parts of the tree may change. In this case, the server must
re-allocate some of the query tree nodes. The old nodes are not de-allocated
(there is no way to do this within a single procedure header), so the procedure
grows. In time, trying to execute the stored procedure will result in a 703
error about exceeding the 64 page limit for a query.

Back to top


6.1.10: How do I manipulate varbinary columns?


The question was posed: how do we manipulate varbinary columns, given that
some portion, say the 5th and 6th bits of the 3rd byte, of a (var)binary
column needs to be updated? Here is one approach, provided by Bret Halford,
using stored procedures to set or clear certain bits of a certain byte of a
field of a row with a given id:

drop table demo_table
drop procedure clear_bits
drop procedure set_bits

create table demo_table (id numeric(18,0) identity, binary_col binary(20))
insert demo_table values (0xffffffffffffffffffffffffffffffffffffffff)
insert demo_table values (0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
insert demo_table values (0x0000000000000000000000000000000000000000)

create procedure clear_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint,  -- specifies which byte of binary_col to change
@mask binary(1))   -- bits to be cleared are turned on,
                   -- bits to be left alone are zeroed
                   -- so 0xff = clear all, 0xfb = clear all but bit 3
as
update demo_table set binary_col =
    substring(binary_col, 1, @bytenum - 1) +
    convert(binary(1),
        convert(tinyint, substring(binary_col, @bytenum, 1)) &
        ~convert(tinyint, @mask)) +
    substring(binary_col, @bytenum + 1, datalength(binary_col) - @bytenum)
from demo_table
where id = @id

create procedure set_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint,  -- specifies which byte of binary_col to change
@mask binary(1))   -- bits to be set are turned on,
                   -- bits to be left alone are zeroed
                   -- so 0xff = set all, 0xfb = set all but bit 3
as
update demo_table set binary_col =
    substring(binary_col, 1, @bytenum - 1) +
    convert(binary(1),
        convert(tinyint, substring(binary_col, @bytenum, 1)) |
        convert(tinyint, @mask)) +
    substring(binary_col, @bytenum + 1, datalength(binary_col) - @bytenum)
from demo_table
where id = @id

select * from demo_table
-- clear bits 2,4,6,8 of byte 1 of row 1
exec clear_bits 1,1,0xAA

-- set bits 1-8 of byte 20 of row 3
exec set_bits 3,20,0xff

-- clear bits 1-8 of byte 4 of row 2
exec clear_bits 2,4,0xff

-- clear bit 4 of byte 5 of row 2
exec clear_bits 2,5,0x08

-- clear bits 1-4 of byte 6 of row 2
exec clear_bits 2,6,0x0f

-- set bits 1-8 of byte 10 of row 2
exec set_bits 2,10,0xff

select * from demo_table
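Working through the mask arithmetic (assuming clear_bits replaces the target
byte with old_byte AND NOT mask): byte 1 of row 1 starts as 0xff, so clearing
the bits in mask 0xAA leaves 0xff & ~0xAA = 0x55. A quick sanity check:

```sql
-- After "exec clear_bits 1,1,0xAA", byte 1 of row 1 should be 0x55
-- (bits 2,4,6,8 cleared, the odd-numbered bits left set).
select substring(binary_col, 1, 1)
from demo_table
where id = 1
```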

Back to top


6.1.11: How do I remove duplicate rows from a table?


There are a number of different ways to achieve this, depending on what you are
trying to achieve. Usually, you are trying to remove duplication of a certain
key due to changes in business rules or recognition of a business rule that was
not applied when the database was originally built.

Probably the quickest method is to build a copy of the original table:

select *
into temp_table
from base_table
where 1=0

Create a unique index, with the ignore_dup_key attribute, on the columns that
cover the duplicating rows. This may be more columns than the key for the table.

create unique index temp_idx
on temp_table(col1, col2, ..., colN)
with ignore_dup_key

Now, insert base_table into temp_table.

insert temp_table
select * from base_table

You probably want to ensure you have a very good backup of the base_table at
this point, because you are going to clear it out! You will also want to check
that temp_table includes the rows you need. You also need to ensure that there
are no triggers on the base table (remember to keep a copy!) or RI constraints.
You probably do not want any of these to fire, or, if they do, you should be
aware of the implications.

Now you have a couple of choices. You can simply drop the original table and
rename the temp table to the same name as the base table. Alternatively,
truncate the table and insert from the temp_table into the original table. You
would need to do this last if you did need the RI to fire on the table etc. I
suspect that in most cases dropping and renaming will be the best option.
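The drop-and-rename route might look like this in isql (assuming temp_table
already holds the de-duplicated rows and base_table is safely backed up):

```sql
drop table base_table
go
exec sp_rename temp_table, base_table
go
-- Recreate any indexes, triggers and permissions the original table
-- had; they do not come across with the rename.
```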

If you want to simply see the duplicates in a table, the following query will
help:

select key1, key2, ...
from base_table
group by key1, key2, key3, key4, ...
having count(*) > 1

Sybase will actually allow a "select *", but it is not guaranteed to work.

Back to top


SQL Advanced bcp ASE FAQ

David Owen

Jan 21, 2004, 4:26:08 AM1/21/04
Archive-name: databases/sybase-faq/part15

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Open Client

7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to ASE using ODBC
7.7 Which version of Open Client works with which ASE?
7.8 How do I tell the version of Open Client I am running?

Freeware Useful SQL Tricks ASE FAQ


7.1: What is Open Client?


Open Client is the interface (API) between client systems and Sybase servers.
Fundamentally, it comes in two forms:


The runtime version is a set of dynamic libraries (dlls on W32 platforms) that
allow client applications to connect to Sybase and Microsoft servers, or, in
fact, any server that implements the Tabular Data Streams (TDS) protocol. You
need some form of Open Client in order to be able to connect to ASE in any way,
shape or form. Even if you are running isql on exactly the same machine as
ASE itself, communication will still be via Open Client. That is not to say
that client-to-server communication on the same machine necessarily goes via
the physical network; that decision is left entirely to the protocol
implementation on the machine in question.


The development version contains all of the libraries from the runtime version,
plus the header files, library files and other files that enable developers to
build client apps that are able to connect to Sybase servers.

Back to top


7.2: What is the difference between DB-lib and CT-lib?


Both DB-lib and CT-lib are libraries that implement the TDS protocol from the
client side.


DB-lib was Sybase's first version. It was a good first attempt, but has/had a
number of inconsistencies. There are, or possibly were, a lot of applications
written using DB-lib. If you are about to start a new Open Client development,
consider using CT-lib; it is the preferred choice. (Which version of TDS does
DB-lib speak? Is it only 4.2?)

Having said that you should use CT-lib for new developments, there is one case
where this may not be true, and that is two-phase commit. Two-phase commit is
supported directly by DB-lib but is not supported directly by CT-lib.


CT-lib is a completely re-written version of Open Client that was released in
the early '90s. The API is totally different from DB-lib, and is much more
consistent. Applications written using DB-lib cannot simply be compiled using
CT-lib; they need a significant amount of porting effort. CT-lib is newer,
more consistent and, in several people's opinions, including mine, slightly
more long-winded. Having said that, the future of DB-lib is uncertain, and it
is certainly not being developed any more; as a result, all new apps should be
written using CT-lib.

Back to top


7.3: What is this TDS protocol?


Tabular Data Streams or TDS is the name given to the protocol that is used to
connect Sybase clients with Sybase servers. A specification for the protocol
can be obtained from Sybase, I had a copy but cannot seem to find it now.

FreeTDS is a project that is reverse engineering the protocol and building a
set of libraries, independent of either Sybase or Microsoft, that are able to
connect to either vendor's servers. FreeTDS is a considerable way down the
line, although I do not believe that it is production ready yet!

As part of the project, they have started to document the protocol, and a view
of TDS 5.0 can be seen here.

Back to top


7.4: I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.


Microsoft SQL Server has always supported the TDS protocol, and up to release 7
it has been the primary means of communication between clients and servers.
With release 7, TDS has been reduced to being a "legacy" protocol. (I do not
know what the communication protocol/mechanism with release 7 is, you will need
to talk to someone from Microsoft or search .)

In order to connect to MS Sql Server 7 using Sybase's Open Client you will need
to install Service Pack 2 of SQL Server 7, available from http://

Back to top


7.5: The Basics of Connecting to Sybase


The following describes how to connect to Sybase ASE on a UNIX machine from a
windows client with isql etc. The specific example is Sybase ASE 11.9 on
Redhat Linux 6.1, using Windows 95 and NT. (I have both on separate partitions
and the process was the same.) This is not a technical review or an in-depth
discussion (there are people far more qualified than me for that ;-) ). Rather
it is more along the lines of "This is how I managed it, it should work for
you". As always there are no guarantees, so if it goes wrong, it's your fault!

The starting point for this discussion has to be, you've downloaded (or
whatever means you used to acquire it) both Sybase ASE for Linux and the PC
Client software (a big zip file) and are ready to install. I'm not going to
discuss the install process as Sybase managed to do a good job of that, so
I'm leaving well alone. The bit you have to take notice of is when you run
srvbuild. This should happen the first time you log on as the user sybase after
the install. If it doesn't, then you can run it by hand afterwards; it lives in
the
$SYBASE directory under bin. The reason why I'm mentioning this is that
srvbuild defaults to installing your database using the name "localhost". Now
the problem with localhost is that it is kind of a special case and would mean
that you could not connect to your database from anywhere other than the server
itself. This would defeat the object of this
discussion, so simply name it something else, bob, george, albert, mydatabase,
whatever, the choice is yours.

Having done this (it takes a while to complete) you should now have a running
database. so try to connect to it on the local machine with something like isql
-SServerName -Usa, (where ServerName is whatever you called it when you ran
srvbuild) when it asks for a password, just press enter and you should be
greeted by the momentous welcome of the isql "1>" prompt.


Not a lot for all the work you have done to get to this point, but you've
connected to your database and that's the main thing. This is very important as
not only does this mean that your database is working, but it also means that
the server half of Open Client is working. This is because even isql on the
server connects to the database using Open Client and you've just proved it
works, cool. Next run dsedit on the server and make a note of the following 3
items:

1: The server name
2: The IP address
3: The port

You're going to need these to get connected from Windows.

Now switch to your Windows machine. Did I remember to tell you to shut down
dsedit on the server? Consider it said ;-). Unpack the PC Client software zip
file and install it using the instructions that came with it. They worked fine
for me and I'm an idiot, so they should work for you. When you've finished, go
to the start menu and start dsedit (on my machine it's under programs ->
sybase). When it runs, it begins with a dialog asking you which Interface
driver to open, I've done this 3 times and went with the default every time, so
it should be a safe bet. At this point you can now add your Linux based server.
Select the menu item serverobject->add. Then enter the name of the server you
just got from your Linux box, in the field labeled "server". It is probably a
good idea that it is the same name you got from your Linux based dsedit to
ensure that everyone is referring to the same server with the same name.
Prevents confusion. This then opens a new window with several fields, one of
which is the server name you just entered. The bottom field is the bit where
you enter the "nitty gritty", the server IP address and port. To do this right
click on the field and select "modify attribute" to open the server address
dialog. When this new dialog opens click add to open yet another dialog (is
there an award for the most gratuitous use of the word dialog???). OK, this is
the last one, honest. Leave the drop down list where it is (hopefully showing
TCP/IP or something similar). Instead move straight to the address field and
enter the following: the Linux server's IP address followed by the port number
(the one from the server's dsedit), separated by a comma. On my machine it looks
like this.,2501
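For comparison, dsedit on the UNIX side is just maintaining the $SYBASE/
interfaces file. The same server entry in raw form might look something like
this (SYBSRV and linuxhost are placeholder names, and the exact format varies
by platform):

```
SYBSRV
        master tcp ether linuxhost 2501
        query tcp ether linuxhost 2501
```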

Now you can "OK" your way back out of the dialogs, back up to where you started
from and exit dsedit. Then launch isql on the windows box and log in.
Personally I did this from a DOS prompt, using exactly the same syntax I did on
the Linux box, but that's just because I like it that way. Now you should be
happily querying your Linux (or other UNIX for that matter) based Sybase ASE
database. What you do with it now, is covered elsewhere in this FAQ from people
able to tell you, unlike me. Now just one more time for good measure, I'm going
to type the word, wait for it.... Dialog.

Back to top


7.6: Connecting to ASE Using ODBC


To begin with you need to be certain that you can connect to your Linux hosted
Sybase ASE database from your Windows based machine. Do this by running isql
on your Windows box and connecting to the database; if this works, then you're
all set (see Q7.5). You will need the Sybase ODBC driver; this came with the PC
Client package. If you got your Windows Open Client software through some other
means, then you may need to download the ODBC driver; this will become
apparent later. Right, begin by launching the 32-bit ODBC administrator, either
from the Sybase menu under start -> programs or the control panel. Ensure that
you are displaying the "user DSN" section (by clicking on the appropriate tab).

You can then click on the button labeled add to move to the driver selection
dialog. Select Sybase System 11 and click on finish. You will by now have
noticed that this is Microsoft's way of taunting you and you haven't actually
finished yet, you're actually at the next dialog. What you have actually done
is told windows that you are now about to configure your Sybase ODBC driver.
There are 4 boxes on the dialog with which you are now presented, and they are:

Data Source Name
Description
Server Name
Database Name

The data source name is the Server name from your interfaces file on your Linux
server. If you are uncertain of any of these values, then log onto your Linux
box, run dsedit and take a look. It will only take you 2 minutes and is much
easier than debugging it later. The description field is irrelevant and you can
put anything in there that is meaningful to you. Server name is the IP address
of the Linux server, that is hosting your database. Database name is the name
of a database to which you want to connect, once your Sybase connection has
been established. If in doubt, you can stick master in there for now, at least
you'll get a connection. Now you can click on OK to get back to the starting
screen, followed by another OK to exit ODBC administrator. We will now test the
connection by running Sybase Central. I chose this because I figure that if
you downloaded the PC Client package, then I know you've got it (at least I'm
fairly sure). When you launch Sybase Central from start->programs->
Sybase, you are presented with a connection dialog. There are 3 fields in this
dialog:

User ID
Password
Server Name

In the field labeled UserID, you can type in sa. If you've been doing some work
on Sybase through other means and you have already created a valid user, then
you can use him (her, it, whatever). In the password field, type in the
appropriate password. Assuming you have changed nothing from the
original Sybase install and you are using sa, then you will leave this blank.
The final field is a dropdown list box containing all the Sybase remote
connections you have. Assuming you only have the one, then you can leave this
alone. If you have more than one, stick to the one that you know works for now
and that allows access to the user you've used. In simple English (and if you
don't speak English, then I hope somebody has translated it :-) ). If this is a
clean install and you have altered nothing after following the instructions
earlier to establish an Open Client, then the top box should contain simply
"sa", the middle box should be blank, and the bottom list-box should contain
whatever the servername is in your Linux based interfaces file. Clicking on OK
will now connect Sybase Central to the database and "away you go"...

Hope this is of some assistance to you, but if you run into problems then I
suggest you post to the newsgroup, which is where the real experts hang out. I
am unlikely to be able to help you, as I have simply noted down my experiences
as I encountered them, in the hope they may help somebody out.
I take no responsibility for anything, including any result of following the
instructions in this text.
Good luck...


Back to top


7.7: Which version of Open Client works with which ASE?


The TDS protocol that *is* Open Client is built so that either the client or
server will fall back to a common dialect. I suppose that it is theoretically
possible that both would fall back for some reason, but it seems unlikely. I was
recently working with a client that was using Open/Client 4.2 to speak to a
version 11.5 ASE using Powerbuilder 3 and 4! Amazing, it all worked! The main
problem that you will encounter is not lack of communication but lack of
features. The facility to bcp out of views was added to the 11.1.1 release. You
will still be able to connect to servers with old copies of Open/Client, you
just won't have all of the features.

There is also another fairly neat feature of the later releases of Open/Client,
it has a very good compatibility mode for working with old applications. The
client that was running Open/Client 4.2 with Powerbuilder 3 is now connecting
to the database using version 11.1.1. Which is not bad when you remember that
Powerbuilder 3 only talked 4.2 DBLib!

Back to top


7.8: How do I tell the version of Open Client I am running?



isql -v

from the command line, will return a string like:

Sybase CTISQL Utility/11.1.1/P-EBF7729/PC Intel/1/ OPT/Thu Dec 18 01:05:29 1997

The 11.1.1 part represents the version number.
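Since the banner is a '/'-separated string, you can pull the version field out
with a shell one-liner (the banner text here is just the example above):

```shell
# Extract the second '/'-delimited field (the version) from an
# Open Client banner string.
banner='Sybase CTISQL Utility/11.1.1/P-EBF7729/PC Intel/1/ OPT/Thu Dec 18 01:05:29 1997'
version=$(printf '%s' "$banner" | cut -d'/' -f2)
echo "$version"
```

This prints "11.1.1" for the banner shown.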

Back to top


Freeware Useful SQL Tricks ASE FAQ

David Owen

Jan 21, 2004, 4:26:08 AM1/21/04
Archive-name: databases/sybase-faq/part16

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Tech Docs Open Client ASE FAQ

The best place to search for Sybase freeware is Ed Barlow's site. He is likely
to spend more time maintaining his list than I will spend on this. I will do my
best!

9.3.4 - converts interfaces file to tli
9.3.5 - Module to transfer data between two servers.
9.3.6 - realtime process and lock monitor
9.3.7 - shows the servers on a particular machine in a
nice format.
9.3.8 Collection of Perl Scripts


9.4.1 Sybtcl - TCL interface to Sybase.
9.4.2 sybdump - a Tcl script for dumping a database schema to disk
9.4.3 wisql - graphical sql editor and more


9.5.1 Sybase Module for Python.

Tools, Utilities and Packages

9.6.1 sqsh - a superset of isql with local variables, redirection,
pipes and all sorts of goodies.
9.6.2 lightweight Sybase Access via Win95/NT
9.6.3 BCPTool - a utility for transferring data from one ASE to another
(inc. native port to Linux).

'Free' Versions of ASE

The next couple of questions will move to the OS section (for real) soon.

9.7.1 How to access a SQL Server using Linux see also Q11.4.6
9.7.2 Sybase on Linux Linux Penguin
9.7.3 How to configure shared-memory for Linux
9.7.4 Sybase now available on Free BSD

Other Sites of Interest

9.8.1 Ed Barlow's collection of Stored Procedures.

9.8.2 Examples of Open Client and Open Server programs -- see Q11.4.14
9.8.3 xsybmon - an X interface to sp_monitor

Sybase Tech Docs Open Client ASE FAQ


9.0: Where is all the code and why does Section 9 suddenly load in a reasonable
amount of time?


This section was in need of a spring clean, and it has now had it. I have
tested all of the stored procs included here against all versions of Sybase
that I have to hand. (, 11.9.2 and 12.5 on Linux, 11.9.2 and 12 on
Solaris and 11.9.2 and 12 on NT.) If Pablo or the supplier documented that he
had tested it on other versions, then I have included those comments. Just
remember that I did not test them on anything pre- If you are still
using them on a pre- release (I know of at least one place that is
still running 4.9.2!) then let me know and I will add a suitable comment.

I have actually taken the code away and built a set of packages. First and
foremost is the stored proc package, then there is a shell script package, a
perl package and finally there is the archive package, which contains any stuff
specific to non-current releases of ASE.

In addition to wrenching out the code I have added some samples of the output
generated by the scripts. It occurred to me that people will be better able to
see if the stored proc does what they want if they can see what it produces.

Finally, part of the reason that this is here is so that people can examine the
code and see how other people write stored procs etc. Each stored proc is in a
file of its own so that you can choose which ones you wish to browse on-line
and then cut and paste them without having to go through the hassle of
un-htmling them.

Back to top

9.1.1: sp_freedevice


This script displays the size of the devices configured for a server, together
with the free and used allocations.

Get it as part of the bundle (zip or tarball) or individually from here.


[30] BISCAY.master.1> sp_freedevice
[30] BISCAY.master.2>> go
total used free
--------------------- --------------------- ---------------------
950.00 MB 750.00 MB 200.00 MB

(1 row affected)
devname size used free
------------------------------ --------------------- --------------------- ---------------------
db01 100.00 MB 72.00 MB 28.00 MB
db02 100.00 MB 0.00 MB 100.00 MB
log01 100.00 MB 51.00 MB 49.00 MB
master 50.00 MB 27.00 MB 23.00 MB
sysprocsdev 200.00 MB 200.00 MB 0.00 MB
tlg01 200.00 MB 200.00 MB 0.00 MB
tmp01 200.00 MB 200.00 MB 0.00 MB

(7 rows affected, return status = 0)
[31] BISCAY.master.1>

Back to top


9.1.2: sp_dos


sp_dos displays the scope of an object within a database. What tables it
references, what other procedures it calls etc. Very useful for trying to
understand an application that you have just inherited.

Get it as part of the bundle (zip or tarball) or individually from here.

The output looks like this:

1> sp_dos sp_helpkey
2> go

** Utility by David Pledger, Strategic Data Systems, Inc. **
** PO Box 498, Springboro, OH 45066 **

(P) sp_helpkey
+--(S) sysobjects
+--(S) syskeys
+--(P) sp_getmessage
+--(S) sysusermessages
+--(P) sp_validlang

(return status = 0)

Back to top


9.1.3: sp_whodo


Sybase System 10.x and above

sp_whodo is an enhanced version of sp_who, with cpu and io usage for each user.
Note that this proc is now a little out of date since Sybase introduced the fid
column, so subordinate threads are unlikely to be grouped with their parent.

Get it as part of the bundle (zip or tarball) or individually from here.


1> sp_whodo
2> go
spid status loginame hostname blk blk_sec program
dbname cmd cpu io tran_name
------ ------------ ------------ ---------- --- ------- ----------------
------- ---------------- ------ ------- ----------------
2 sleeping NULL 0 0
4 sleeping NULL 0 0
master DEADLOCK TUNE 0 0
5 sleeping NULL 0 0
6 sleeping NULL 0 0 <astc>
master ASTC HANDLER 0 0
7 sleeping NULL 0 0
8 sleeping NULL 0 0
master HOUSEKEEPER 0 33
17 running sa n-utsire.m 0 0 ctisql
master SELECT 0 1

(7 rows affected)

Back to top


9.1.4: sp__revroles


Well, I cannot get this one to do what it is supposed to do. I am not sure if
it was written for a different release of Sybase, and 11.9.2 and above have
changed the way that roles are built, or what. Anyway, I may work on it some
more.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top


9.1.5: sp__rev_configure


This proc reverse engineers the configure settings. It produces a set of calls
to sp_configure for those values that appear in syscurconfigs. I am not sure
how relevant this is with the ability to save and load the config file.

Get it as part of the bundle (zip or tarball) or individually from here.

The output is as follows, however, I have edited away some of the values since
my list was considerably longer than this.

-- sp_configure settings
sp_configure 'recovery interval', 5

sp_configure 'allow updates', 0

sp_configure 'user connections', 25
sp_configure 'memory', 14336
sp_configure 'default character set id', 2
sp_configure 'stack size', 65536
sp_configure 'password expiration interval', 0
sp_configure 'audit queue size', 100
sp_configure 'additional netmem', 0
sp_configure 'default network packet size', 512
sp_configure 'maximum network packet size', 512
sp_configure 'extent i/o buffers',
sp_configure 'identity burning set factor', 5000
sp_configure 'size of auto identity', 10
sp_configure 'identity grab size', 1
sp_configure 'lock promotion threshold', 200

(41 rows affected)
(return status = 0)

Back to top


9.1.6: sp_servermap


A one stop shop for a quick peek at everything on the server.

Get it as part of the bundle (zip or tarball) or individually from here.

The output for a brand new ASE on Linux server is as follows:

Current Date/Time
------------------------------ --------------------------
TRAFALGAR Jan 14 2001 1:48PM



SQL Server/ ESD#6/P-FREE/Linux Intel/Linux 2.2.14 i686/1/OPT/Fri Mar 17 15:45:30 CET 2000

db dbid segmap segs device fragment start (pg) size (MB)
--------------- ------ ----------- ---- --------------- ----------- ---------
master 1 7 LDS master 4 3.00
master 1 7 LDS master 3588 2.00
tempdb 2 7 LDS master 2564 2.00
model 3 7 LDS master 1540 2.00
sybsystemprocs 4 7 LDS sysprocsdev 16777216 150.00
sybsecurity 5 15 ULDS sybsecurity 33554432 300.00

Segment Codes:
U=User-defined segment on this device fragment
L=Database Log may be placed on this device fragment
D=Database objects may be placed on this device fragment by DEFAULT
S=SYSTEM objects may be placed on this device fragment

db dbid size (MB) db status codes created
dump tran
--------------- ------ --------- ------------------ ---------------
master 1 5.00 01 Jan 00 00:00
07 Jan 01 04:01
tempdb 2 2.00 A 14 Jan 01 13:46
14 Jan 01 13:47
model 3 2.00 01 Jan 00 00:00
07 Jan 01 03:38
sybsystemprocs 4 150.00 B 07 Jan 01 03:32
14 Jan 01 13:43
sybsecurity 5 300.00 B 07 Jan 01 04:01
07 Jan 01 04:55

Status Code Key

Code Status
---- ----------------------------------
A select into/bulk copy allowed
B truncate log on checkpoint
C no checkpoint on recovery
D db in load-from-dump mode
E db is suspect
F ddl in tran
G db is read-only
H db is for dbo use only
I db in single-user mode
J db name has been changed
K db is in recovery
L db has bypass recovery set
M abort tran on log full
N no free space accounting
O auto identity
P identity in nonunique index
Q db is offline
R db is offline until recovery completes

device fragment start (pg) size (MB) db lstart segs
--------------- ----------- --------- --------------- ----------- ----
master 4 3.00 master 0 LDS
master 1540 2.00 model 0 LDS
master 2564 2.00 tempdb 0 LDS
master 3588 2.00 master 1536 LDS
sybsecurity 33554432 300.00 sybsecurity 0 ULDS
sysprocsdev 16777216 150.00 sybsystemprocs 0 LDS

Segment Codes:
U=User-defined segment on this device fragment
L=Database LOG may be placed on this device fragment
D=Database objects may be placed on this device fragment by DEFAULT
S=SYSTEM objects may be placed on this device fragment

device vdevno default disk? total (MB) used free
--------------- ------ ------------- ---------- ------- -------
master 0 Y 100.00 9.00 91.00
sysprocsdev 1 N 150.00 150.00 0.00
sybsecurity 2 N 300.00 300.00 0.00

device location
--------------- ------------------------------------------------------------
master d_master
sybsecurity /d/TRAFALGAR/3/sybsecur.dat
sysprocsdev /d/TRAFALGAR/2/sybprocs.dat

(return status = 0)

Back to top


9.1.7: sp__create_crosstab


Hmmm... not quite sure about this one. Was not 100% sure about how to set it
up. From the description it builds a cross tab query. If someone knows how to
use this, then let me know how to set it up and I will improve the description
here and provide some output.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top


9.1.8: sp_ddl_create_table


Well, you all know what a create table statement looks like... This produces
the table definitions in their barest form (lacking in constraints etc) and the
resulting DDL is perhaps not as elegant as that of some other utilities (far be
it from me to blow dbschema's trumpet :-) ), but it is worth a look just for the
query. The layout, with carriage returns embedded within strings, is a little
unusual.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top


9.1.9: sp_spaceused_table



In an environment where a lot of temporary tables (#x) are being created, how
do you tell who is using how much space? The answer is sp_spaceused_table,
which basically lists the tables in a database with rowcount and space usage
statistics. I have replaced the original proc and K-shell script with a single
proc; I think that it is easier to compare if it is all in one listing.
However, if you disagree, I will add the original code to the archive package;
just let me know.

Get it as part of the bundle (zip or tarball) or individually from here.

The output of the proc is as follows: (I used sqsh, hence the prompt, since it
auto-resizes its width as you resize the xterm.)

[25] N_UTSIRE.tempdb.1> sp_spaceused_table
[25] N_UTSIRE.tempdb.2> go
name rowtotal reserved data index_size unused
--------------------------------------------- ----------- --------------- --------------- --------------- ---------------
#matter______00000010014294376 12039 3920 KB 3910 KB 0 KB 10 KB
#synopsis____00000010014294376 6572 15766 KB 274 KB 15472 KB 20 KB
#hearing_____00000010014294376 5856 572 KB 568 KB 0 KB 4 KB
#hearing2____00000010014294376 5856 574 KB 568 KB 0 KB 6 KB
#hearing3____00000010014294376 5856 574 KB 568 KB 0 KB 6 KB
#synopsis2___00000010014294376 6572 15820 KB 274 KB 15472 KB 74 KB

(return status = 0)

Back to top


David Owen

Jan 21, 2004, 4:26:10 AM1/21/04
Archive-name: databases/sybase-faq/part19

Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Additional Information



Power Sites

* Rob Verschoor's site ( is packed with useful
information about Sybase ASE and replication, as well as a couple of
quick-reference guides on steroids for both. Grab the ASE one from http:// and the replication one from http://
* Ed Barlow keeps about the most complete set of references to freeware/
public domain/shareware available for Sybase. Check out his site at http:/

Useful Documentation

* The unauthorized documentation of DBCC by Al Huntley -
* More DBCC's by KaleidaTech Associates, Inc. -
* Anthony Mandic's Installing Sybase on Solaris -
* John Knox has a good paper on the contents of the interfaces file at http:/

Sybase Resources

* Pacific Rim Network Systems Inc Sybase Resource Links
* SQL Server and Rep Server on NT
* Todd Boss has a host of useful stuff at
* I am not sure who this site belongs to, but it contains lots of good stuff.

Books, Magazines and Articles

* Sybase Documentation
* Intro to Sybase Architecture -
* SQL Forum (sadly the technical papers that were
there are gone).
* Connecting Sybase to the Web -


* sybinit4ever: Sybase ASE 11.5 ASCII-only server creation tool - http://
* Sybase Freeware and Shareware at Ed Barlow's site
* Thierry Antinolfi has a very good site packed full of useful tools and
information at
* DBD::Sybase
* DBI/DBD:Sybase on Linux
* Sybase Scheme Extensions -
* SQSH - (SQL SHell for Unix) by Scott Gray
* ISUG's Freeware Collection
* Sybase to HTML Converter
* Tool to access Sybase server with line editing and history recall http://
* Sybase connectivity libraries
* Manish I Shah's Smart Sybase Editor
* A web to Sybase interface
* Al Huntley has some nifty tools as well as the DBCC list http://
* John Knox has a nifty tli2ip and ip2tli converter at http://
* A very useful project to build a free set of Open Client libraries is at
* De Clarke has some very useful SybTcl stuff, start looking at http:// One of the really nice apps is
Sybase PerfMeter.
* An ODBC based Windows isql type client can be found at http:// (there is a free "lite" version and a
commercial version).
* Imran Hussain has written a number of Sybase utilities; they can be found
* Brian Ceccarelli's BrainTools can be accessed from http://
* Ginola Pascal's Like Sybase Central can be grabbed from http://

User Groups

* International Sybase User Group -
* Indiana Sybase User's Group
* Ontario Sybase User Group (OSUG) Website -
* DCASUG, DC Area Sybase User Group -
* New Zealand Sybase User Group -
* Wisconsin Sybase User Group -
* Tampa Bay Sybase User Group -

Related FAQs

* ASE on Linux FAQ -
* Sybperl FAQ -
* Tuning Sybase System 11 for NetWare on Compaq -
* SQR FAQ/User Group -
* EAServer FAQ -
* BusinessObjects FAQ -


* Yale Centre for Medical Informatics
* NC State University
* Simon Fraser University
* University of California
* Rutgers

Commercial Links

The following sites are placed here without any endorsement by the FAQ

* Ed Barlow's site of sites

The mother ship may be reached at



12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase



12.1: Sybase IQ


(This deserves to be a section all on its own, as per ASE and ASA. However, I
know absolutely nothing about it. If anyone would like to help, I would be very
grateful for some more information. My expectations are not high though.)

Sybase IQ isn't meant as just an indexing scheme, per se. It is meant as a
means of providing a low cost data warehousing solution for unplanned queries.

By the way, Sybase IQ does not use bitmapped indexes, it uses bitwise indexes,
which are quite different. [Anyone care to add a paragraph explaining the
difference? Ed.]

In data warehousing MIS generally does not know what the queries are. That also
means that the end users often don't know what the queries are. Not knowing
what the queries are turning end users loose on a 500GB operational database to
perform huge queries could prove to be unacceptable (it may bring the system
down a crawl). So, many customers are resorting to separating their operational
databases (OLTP) and data warehousing databases. By providing this separation
the operational database can continue about its business and the data warehouse
users can issue blind queries without affecting the operational systems.
Realize that operational systems may handle anywhere from hundreds to a few
thousand users and, more likely than not, require data that is highly accurate.
However, data warehouse users often don't require up to the second information
and can often wait several hours, 24 hours or even days for the most current
snapshot and generally don't require updates to be made to the data.

So, Sybase IQ can be updated a few times a day, once a day or a few times a
week. Realize that Sybase IQ is strictly a data warehousing solution. It is not
meant for OLTP systems.

Sybase IQ can also sit on top of Sybase SQL Server:

[end user]
[Sybase IQ]
[Sybase SQL Server]

What happens in this environment is that a data warehouse user can connect to
Sybase IQ. Sybase IQ will then take care of processing the query or forwarding
the query to SQL Server if it determines that the access paths in SQL Server
are faster. An example where SQL Server will be faster than Sybase IQ in
queries is when SQL Server can perform query coverage with the indexes built in
SQL Server.

The obvious question is: why not index every column in SQL Server? Because it
would be prohibitive to update any of the data. Hence, Sybase IQ, where all the
columns are making use of the bitwise index scheme. By the way, you can choose
which columns will be part of an IQ implementation. So, you may choose to have
only 30% of your columns as part of your Sybase IQ implementation. Again, I
can't stress enough that Sybase IQ is strictly for data warehousing solutions,
not OLTP solutions.

Back to top


12.2: Net Book Review


* An Introduction to Database Systems
* Sybase
* Sybase Architecture and Administration
* Developing Sybase Applications
* Sybase Developer's Guide
* Sybase DBA Survival Guide
* Guide to SQL Server
* Client/Server Development with Sybase
* Physical Database Design for Sybase SQL Server
* Sybase Performance Tuning
* Sybase Replication Server, An Administrators Guide
* Optimising Transact-SQL
* Tree and Graph Processing in SQL
* Transact SQL
* Sybase ASE, Database Consistency Checking
* Configuring & Tuning Databases on the Solaris Platform

An Introduction to Database Systems

ISBN: 0-201-54329-X Published by Addison-Wesley. Volume I and II.

This book is rightly regarded by many as the Bible of Database Management
Systems. Not a book that goes into detailed specifics of any particular
implementation (although it draws many examples from DB2), this book covers the
practical theory that underlies all relational systems as well as DBMS in
general. It is written in an easy to read, approachable style, and gives plenty
of practical examples.

Covering all aspects, from straight forward issues (such as what is a
relational database), to practical procedures (all forms of normalization are
covered, and explained). SQL is briefly covered, in just the right amount of
detail. The book includes detailed discussions of issues such as recovery,
concurrency, security and integrity, and extensions to the original relational
model. Current issues are dealt with in detail, such as client/server systems
and the Object Oriented model(s). Literally hundreds of references are included
for further reading.

If you want a book to refer to when your curiosity gets the better of you, or
when a user needs a better understanding of some important database concept,
this is it. It strikes the right balance between theory and practice, and
should be found on every database administrator's bookshelf.

Sybase - McGoveran and Date

ISBN: 0-201-55710-X Published by Addison-Wesley. 450 pages.

I think that once, not too long ago, this used to be the only book on Sybase
available. Now it seems to be totally out of print! It covered versions of
Sybase SQL server up to 4.8. It covered a number of aspects of Sybase,
including APT.

Sybase Architecture and Administration - Kirkwood

ISBN: 0-13-100330-5 Published by Ellis Horwood. 404 pages.

This is a good book covering Sybase systems up to and including System 10. It
deals to a good depth the architecture and how most of the functions such as
the optimiser work. It explains in a readable style how devices work, and how
indexes are stored and manipulated.

Developing Sybase Applications - Worden

ISBN: 0-672-30700-6 Published by SAMS. ??? pages. (Inc CD.)

This book seems very similar to his Sybase Developer's Guide to me and so I
have not bought it. I have browsed through it several times in the book shop,
and decided that his other
book covers a good deal of this. There are chapters on Visual Basic and

Sybase Developer's Guide - Worden

ISBN: 0-672-30467-8 Published by SAMS. 698 pages. (Inc disk.)

This is a big book that does not, in my opinion, cover very much. In fact the
disk that is included contains DBATools, and that seems to sum up the first 50%
of the book. There is a fair amount of coverage of the general architecture and
how to install Sybase. Transact SQL, cursors and stored procedures get a fair
covering, as does using C/C++ with DB-Library. (I can find no mention of
CT-Library.) Unfortunately quite a lot of the book covers general issues which
are not covered in sufficient depth to be useful, and just seem to be there to
give the book bulk. Maybe as a developer's guide, his other book would be a
better buy. This would probably be most useful to a small company implementing
a Sybase database.

Sybase DBA Survival Guide - Jeff Garbus, David Solomon, Brian Tretter

ISBN: 0-672-30651-4 Published by SAMS. 506 pages. (Inc disk.)

This book is good, and is a great help in a crisis. It includes lots of useful
ideas and strategies for most (if not all) of the DBA tasks. It covers Sybase
SQL Server on all platforms. It does not specifically cover any of the
Microsoft versions, and certainly not version 6. It does cover System 10. It is
very good at explaining the output from things like the DBCC commands. There is
also a good section on what to look for in the errorlog. If you are a DBA and
want to buy just one book, I would recommend this one since it covers just
about everything you will need to know. This book is filled with little hints,
tips and warnings which are very useful. They have certainly saved my bacon on
a number of occasions, and have even made me look a real star more than once.

Guide to SQL Server - Aloke Nath

ISBN: 0-201-62631-4 Published by Addison-Wesley. 567 pages.

This book is solely about MS SQL Server, covering 4.2 for OS/2 and SQL Server
NT. It is not bad, but does seem to regurgitate a lot from the Sybase [sic]
manuals. Its coverage is fairly broad dealing with Transact SQL on the one hand
through to client configuration on the other. It does cover the aspects of MS
Sqlserver that are different from Sybase, (dbcc perfmon for instance) but it
does not flag any as such. Probably a good buy if you only have MS Sqlserver
and never intend looking at Sybase.

Client/Server Development with Sybase - Alex Berson and George Anderson,

ISBN: 0-07-005203-4 Published by McGraw-Hill. 743 pages.

I have used this book as a reference when system manuals were not available.
It is much more useful on how things work and what approach to use rather than
as a compilation of details.

The breadth of topics pleased me - all the right jargon is mentioned. The
introduction mentions CORBA and DCE. Sybase RPC is compared to UNIX RPCs.
Middleware products are discussed. Talks with our sales rep about the OMNI
and NetGateway products were greatly assisted by using the diagrams in the Open
Server and Gateways chapter.

Like any printed text, it is dated. The Netgateway diagram does not
show a TCP/IP interface to MVS. However, the information provided is not really
diminished. This goes back to the fact that this is a How Things Work and How
to Use Things book, not a compilation of details on a single version.

Physical Database Design for Sybase SQL Server - Rob Gillette, Dean Meunch, Jean

ISBN: 0-13-161523-8 Published by Prentice-Hall. 225 pages.

Supposedly the first in a series from Sybase Professional Services, espousing
the Sybase Development Framework or SDF (tm). I've seen no more books, and have
never heard any more about SDF. This book is a reasonable attempt to guide
developers through the process of turning a logical database design into a
physical Sybase implementation.

Topics include:

* Defining Tables and Columns
* Defining Keys
* Identifying Critical Transactions
* Adding Redundant Columns
* Adding Derived Columns
* Collapsing Tables
* Splitting Tables
* Handling Supertypes and Subtypes
* Duplicating Parts of Tables
* Adding Tables for derived Data
* Handling Vector Data
* Generating Sequence Numbers
* Specifying Indexes
* Maintaining Row Uniqueness
* Handling Domain Restrictions
* Handling Referential Integrity
* Maintaining Derived and Redundant data
* Handling Complex Integrity Constraints
* Controlling Access to Data
* Managing Object Sizes
* Recommending Object Placement
* Required Inputs to Physical DB Design
* Naming Guidelines

Covers System 10. Lots of good practical hints and guidelines on database
design. In the absence of any competition - a definite recommendation for
newcomers to Sybase database design.

Sybase Performance Tuning - Shaibal Roy & Marc B. Sugiyama

ISBN 0-13-442997-4 Published by Prentice Hall. 622 pages.

Covers the topics:

* Tuning for performance
* Hardware and system software
* Sybase product and feature overview
* SQL Server - form and structure
* SQL Server - methods and features
* Physical database design
* Application development
* Monitoring SQL Server
* Instrumenting SQL Code
* Transaction processing performance
* Query processing performance
* Batch processing performance
* Advanced topics - I/O subsystems, named caches and buffer pools, and others
* Also a load of extra configuration details.

A pleased customer on the above book:

Just a quick note to let you know of a very good book on Performance Tuning
that isn't mentioned in the Sybase FAQ. I bought it a little while ago and it
has quickly become invaluable. It's by two pretty gifted Sybase engineers
in the SQL Server Performance Team and covers loads of things up to and
including System 11. It deserves to become as big as the bible :)

This I believe is the Holy Grail of Sybase books that a lot of people have
been looking for - an exaggerated claim perhaps - but a damn fine book.

Sybase Replication Server - An Administrators Guide - John Kirkwood and Garry

ISBN 0-9537155-0-7 Published by Kirkwood Associates Ltd

This is a very readable introduction and guide to Sybase replication. Having
just installed and configured my first repserver site, this book proved very
useful. Rather than give a whole break down of the contents, the book is
featured on their website where a full
breakdown of the contents etc can be obtained. This is one of the few books on
replication and I can thoroughly recommend it to new users and people with a
fair amount of replication experience. I cannot say whether or not it would be
useful to people with a lot of replication experience since I don't know anyone
of that ilk who has read it.

Optimising Transact-SQL

SQL Forum Press; ISBN: 0964981203

This book is definitely not for the beginner. It covers what the author
describes as characteristic functions. These are functions that allow you to do
a lot of data manipulation with a single pass of a table. Whether you like them
or not is completely a matter of taste. Read the reviews on Amazon to see the
truth in that statement. The book pre-dates the inclusion of the CASE statement
into most SQL dialects, including T-SQL, and it is certainly true that you can
use the CASE statement to do a lot of what characteristic functions can do.
However, table pivoting is definitely an exception and there are probably
others. Personally I like the book since it shows a completely different way of
thinking about problems and their solution.

Possibly tricky to get hold of.
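
To give a flavour of the idiom (this is my illustration, not an example taken
from the book): a characteristic function builds a 0/1 flag out of pure
arithmetic, classically 1 - abs(sign(a - b)), so that a single pass of the
table can compute several conditional aggregates at once. The sketch below
uses SQLite via Python, substituting min(abs(...), 1) for sign() since older
SQLite builds lack a sign() function; all table and column names are invented.

```python
import sqlite3

# Invented data, purely for illustration; not an example from the book.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust TEXT, qno INT, amount INT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("acme", 1, 10), ("acme", 2, 20),
    ("bolt", 1, 5),  ("bolt", 2, 15),
])

# delta(a, b) = 1 - min(abs(a - b), 1) is 1 when a = b and 0 otherwise
# (for integers).  Multiplying an aggregate's argument by it filters
# rows arithmetically, so one pass pivots the quarters into columns.
rows = conn.execute("""
    SELECT cust,
           SUM(amount * (1 - min(abs(qno - 1), 1))) AS q1_total,
           SUM(amount * (1 - min(abs(qno - 2), 1))) AS q2_total
    FROM orders
    GROUP BY cust
    ORDER BY cust
""").fetchall()
print(rows)   # [('acme', 10, 20), ('bolt', 5, 15)]
```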

Tree and Graph Processing in SQL

SQL Forum Press; ISBN: ???

The only thing I have on this is the following:

The best work I've ever read on the subject of Tree and Graph processing in SQL
is strangely entitled: "Tree and Graph Processing in SQL" by Dr. David
Rozenshtein et al.

Paul Horan [TeamSybase]

There are no reviews on Amazon at this time, so I cannot even send you there.

Possibly tricky to get hold of.

Transact SQL Programming

ISBN 1-56592-401-0 Published by O'Reilly

This book covers both the Sybase and Microsoft dialects of T-SQL. There is a
very clear side-by-side comparison of the two sets of features. There is also
an excellent description of all of the Microsoft features. I find the same is
not so true about the Sybase parts. The actual book is up to normal O'Reilly
standards and is very readable.

Sybase ASE, Database Consistency Checking

ISBN 0-9537155-1-5 Published by Kirkwood Associates Ltd

This is John Kirkwood's latest offering. The title tells all as far as subject
matter is concerned. An excellent offering, very readable. Covers a lot of the
undocumented dbcc's plus lots of other good stuff. I would have to say a
definite must for all DBAs. Obviously not a book for developers, unless they
are also part time DBAs. However, if you wanted to get a better understanding
of how Sybase internal storage works, this covers a lot of that.

At the time of writing the book was available from some outlets but not
others; I am not sure if this is likely to change or not. You can always
get it from his own site.

Configuring & Tuning Databases on the Solaris Platform

ISBN: 0-13-083417-3 Published by Sun Microsystems Press. 502 pages.

An excellent book that slices and dices from both OS and database perspectives.
Oracle, Sybase (ASE and a bit of IQ-M), Informix XPS, and DB2 are covered. The
core subject is covered in a drill-down fashion and includes details between
various versions (including Oracle 9i, ASE 12.5, and Solaris 2.8). The author
also covers database architectures, application workloads, capacity planning,
benchmarking (including the various TPC flavors), RAID (including Sun Volume
Manager and Veritas), performance metrics, and Java. Even for non-Sun
environments this book may be quite useful.

Back to top


12.3: email lists

* sqsh-users (YahooGroups) - Bugs/issues/complaints about sqsh - see Q9.12.
  Volume: < 1 email/day. Subscribe by sending mail to the group's subscribe
  address, or goto the group's Yahoo page for more details.
* sybase-dba (YahooGroups) - Discussion of administration of Sybase databases.
  Volume: < 1 email/day. Subscribe by sending email to the group's subscribe
  address, or goto the group's Yahoo page for more details.
* SYBASE-L (Listserv) - Discussion of SYBASE Products, Platforms & Usage.
  Volume: ~ 10 - 20 emails/day. Subscribe by sending email to
  LIST...@LISTSERV.UCSB.EDU with a subject of

      SUBSCRIBE SYBASE-L your name

* SYBASE-L (YahooGroups) - Exactly the same list as above, but through Yahoo.
  One of the nice features of having the group mirrored at Yahoo is that it
  makes trawling the archives very easy. Goto the website (there are enough
  links to it already on this page), feed 'sybase-l' into the search box,
  select the correct group and read. Volume: ~ 10 - 20 emails/day.
* Sybperl (Listserv) - Discussion of things Perl and Sybase. Volume: < 1
  email/day. Subscribe by sending email with a subject of

      SUBSCRIBE SYBPERL-L yo...@email.address

* ase-linux (Majordomo) - Specific discussion of Sybase on Linux. Volume:
  1 - 5 emails/day. Subscribe via the list's web page.

Back to top


12.4: Finding Information at Sybase


Sybase has now gone completely Portal or is that Postal? The front desk is
most definitely, which leads to a very polished site. A more
useful thing to do is to sign up at the site for your own particular
perspective. You can do this by going to, where you can
configure your account to only show you those parts of the system that you are
interested in or that are relevant to you. The links below give you a couple of faster
pointers to some specific sites.

Sybase Web Sites

Caveat: Sybase has implemented a portal. Quite a number of the old links that
were/are in the FAQ now no longer work. The following is tried and tested as of
today (20th September 2001) but could well become out-of-date. Let's hope not!

Here's a list of internet web sites at Sybase:

Sybase corporate (search, browse)
This is the start of the portal. From here you can get everywhere. The
following links simply allow for a more direct route to a few places.
Sybase Technical Support Web site (gateway, meta-se