
Sybase FAQ: 1/19 - index


David Owen

Apr 20, 2004, 9:44:59 AM
Archive-name: databases/sybase-faq/part1
URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Sybase Frequently Asked Questions


Sybase FAQ Home Page | Adaptive Server Enterprise FAQ | Adaptive Server Anywhere FAQ |
Replication Server FAQ | Search the FAQ
Sybase FAQ

Main Page

* Where can I get the latest release of this FAQ?
* What's new in this release?
* How can I help with the FAQ?
* Who do I tell about problems in the FAQ?
* Acknowledgements and Thanks
* Hall of Fame
* Copyright and Disclaimer
* General Index

Main | ASE | ASA | REP | Search

-------------------------------------------------------------------------------

Where can I get the latest release of this FAQ?

International Sybase User Group

The main page for this site is http://www.isug.com/Sybase_FAQ. It is hosted
there by kind permission of the International Sybase User Group
(http://www.isug.com) as a service to the Sybase community.

To get a text version of this FAQ:

ftp://ftp.midsomer.org/pub/FAQ_txt_tar.Z

or

ftp://ftp.midsomer.org/pub/FAQ_txt.zip

If you want uncompressed versions of the various sections, they can be
obtained from the ASE, ASA & REP pages.

To get the HTML for this FAQ:

ftp://ftp.midsomer.org/pub/FAQ_html_tar.Z

or

ftp://ftp.midsomer.org/pub/FAQ_html.zip

Last major update: 21st February 2003.

Back to Top

-------------------------------------------------------------------------------

What's new in this release?

Release 1.9

* Running multiple servers on a single server (UNIX and NT).

Back to Top

-------------------------------------------------------------------------------

What's happening with the FAQ?

I have not had a lot of time to spend on the FAQ this year. Mainly, this is
down to work, or the lack of it. I know, we are all in the same boat. It has
meant that I have had a lot less free time than I used to, and as a result
the FAQ has not been kept as up to date as I would like. Sadly, the work I have
been doing is with those other database vendors, but we won't name them here.
Anyway, that is the sob story over and done with. If anyone thinks that they
would like to see more effort applied, I would be happy to hand over the
mantle. Since the amount of help that I have actually seen amounts to
practically none, I am sure I will not be over-run with offers! I will
definitely have more time come January and plan some serious work on it then.

Back to Top

-------------------------------------------------------------------------------

How can I help with the FAQ?

I have had offers from a couple of people to write sections, but if you feel
that you are in a position to add support for a section, or if you have some
FAQs to add, please let me know. This is a resource that we should all
support, so send me the stuff and I will include it.

Typos and specific corrections are always very useful. Less useful is the
general "I don't think that section x.y.z is very understandable". Sorry to
sound harsh, but what I need is actual text that is more readable. Better still
is actual HTML that makes it stand out and sing (if necessary)!

Currently I am looking for maintainers of the following sections: Replication,
Adaptive Server Anywhere, IQ Server, MPP Server and Open Server. I am not sure
whether to add a section for Omni Server. I sort of feel that, since Omni has
been subsumed into ASE as CIS, any FAQs should really be incorporated
there. However, if you know of some good Omni gotchas or tips, whether they
are still there in CIS or not, please send them in. I certainly plan to have a
subsection of ASE dealing with CIS even if Omni does not get its own major
section. I also think that we need sections on some of the really new stuff.
Jaguar and the new engines also deserve a spot.

Another very useful way that you can help is in getting people to update their
links. I have seen lots of links recently, some still pointing to Pablo's
original, some pointing to Tom's site but referring to it as coming from the
SGI site.

Back to Top

-------------------------------------------------------------------------------

Who do I tell about problems in the FAQ?

The current maintainer is David Owen (do...@midsomer.org) and you can send
errors in the FAQ directly to me. If you have an FAQ item (both the question
and the answer), send it to syb...@midsomer.org and I will include it.

Do not send email to any of the officials at ISUG; they are simply hosting the
FAQ and are not responsible for its contents.

Also, do not send email to Sybase; they are not responsible for the contents
either. See the Disclaimer.

Back to Top

-------------------------------------------------------------------------------

Acknowledgements and Thanks

Special thanks must go to the following people for their help in getting this
FAQ to where it is today.

* Pablo Sanchez for getting the FAQ off the ground in the first place and for
many years of dedicated work in maintaining it.

* Anthony Mandic (a...@peppler.org) for a million things. Patiently answering
questions in all of the Sybase news groups, without which most beginners
would be lost. For supporting and encouraging me in getting this FAQ
together and for providing some pretty neat graphics.

* The ISUG, especially Luc Van der Veurst (lu...@az.vub.ac.be) and Michael
Peppler (mpep...@peppler.org), for hosting this FAQ and providing support
in setting up the website.

* The members of the various news groups and mailing lists who, like Anthony,
provide unstinting support. The list is fairly long, but I think that Bret
Halford (br...@sybase.com) deserves a mention. If you go to Google News
and do a search, he submits almost as many replies as Anthony.

Back to Top

-------------------------------------------------------------------------------

Hall of Fame

I am not sure how Pablo chose his select list, but there is certainly no
question as to their inclusion. I know that there are a couple of awards that
the ISUG give out each year for the people that the ISUG members believe have
contributed most to the Sybase community that year. I think that this section
should honour those people that deserve an award each and every year. If you
know of a candidate, let me know and I will consider his or her inclusion.
Self-nominations are not acceptable :-)

The following people have made it to the Sybase FAQ Hall of Fame:

* Michael Peppler (mpep...@peppler.org) For Sybperl and all of the other
tools of which he is author or instigator plus the ceaseless support that
he provides through countless mailing lists, newsgroups and directly via
email.

* Scott Gray (gr...@voicenet.com) Father of sqsh, much more than simply a
replacement for isql. How anyone developing or administering Sybase can
survive without it, I will never know.

* Pablo Sanchez (www.hpdbe.com) Pablo got the first web-based FAQ off the
ground, wrote most (all?) of the first edition and then maintained it for a
number of years. He did a fantastic job, building a resource that is
worth its weight in gold.

Back to Top

-------------------------------------------------------------------------------

Copyright and Disclaimer

Distribution

You are free to copy or distribute this FAQ in whole or in part, on any medium
you choose provided that you:

* include this Copyright and Disclaimer notice;
* do NOT distribute or copy, in any fashion, with the intention of making a
profit from its use;
* give FULL attribution to the original authors.

Disclaimer

This FAQ is provided as is, without any express or implied warranties. While
every endeavour has been made to ensure the accuracy of the information
contained within the articles, neither the author nor any of the contributors
assume responsibility for errors or omissions, or for damages resulting from
the use of the information contained herein.

If you are not happy about performing any of the suggestions contained within
this FAQ, you are probably better off calling Sybase Technical Support.

Copyright

This site and all its contents belong to the Sybase FAQ
(http://www.isug.com/Sybase_FAQ).

Unless explicitly stated in an article, all material within this FAQ is
copyrighted. The primary copyright holders are David Owen and Pablo Sanchez.
However, all contributed material is, and will remain, the property of the
respective authors and contributors.

Back to Top

-------------------------------------------------------------------------------
ASE

1.1: Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?

1.2: User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

1.3: Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database
bigger?
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

1.4: General Troubleshooting

1. How do I turn off marked suspect on my database?
2. On startup, the transaction log of a database has filled and recovery has
suspended, what can I do?
3. Why do my page locks not get escalated to a table lock after 200 locks?

1.5: Performance and Tuning

1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered indexes?
1.5.4 Optimistic versus pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

1.6: Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

2.1: Platform Specific Issues - Solaris

2.1.1 Should I run 32 or 64 bit ASE with Solaris?
2.1.2 What is Intimate Shared Memory or ISM?

2.2: Platform Specific Issues - NT/2000

2.2.1 How to Start ASE on Remote NT Servers
2.2.2 How to Configure More than 2G bytes of Memory for ASE on NT
2.2.3 Installation Issues

2.3: Platform Specific Issues - Linux

2.3.1 ASE on Linux FAQ

3: DBCC's

3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 Other sites with DBCC information.
3.4 Fixing a Munged Log

Performing any of the above may corrupt your ASE installation. Please do
not call Sybase Technical Support after screwing up ASE. Remember, always
take a dump of the master database and any other databases that are to be
affected.

4: isql

4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?
4.4 What alternatives to isql exist?
4.5 How can I make isql secure?

5: bcp

5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

6.1: SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

6.2: SQL Advanced

6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE
6.2.11 Is it possible to concatenate all the values from a column and
return a single row?
6.2.12 Selecting rows N to M without Oracle's rownum?
6.2.13 How can I return number of rows that are returned from a grouped
query without using a temporary table?

6.3: Useful SQL Tricks

6.3.1 How to feed the result set of one stored procedure into another.
6.3.2 Is it possible to do dynamic SQL before ASE 12?

7: Open Client

7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to ASE using ODBC
7.7 Which version of Open Client works with which ASE?
7.8 How do I tell the version of Open Client I am running?

9: Freeware

9.0 Where is all the code and why does Section 9 suddenly load in a
reasonable amount of time?

Stored Procedures

9.1.1 sp_freedevice - lists device, size, used and free.
9.1.2 sp_dos - This procedure graphically displays the scope of an object
9.1.3 sp_whodo - augments sp_who by including additional columns: cpu,
I/O...
9.1.4 sp__revroles - creates DDL to sp_role a mirror of your SQL
Server
9.1.5 sp__rev_configure - creates DDL to sp_configure a mirror of your
SQL Server
9.1.6 sp_servermap - overview of your SQL Server
9.1.7 sp__create_crosstab - simplify crosstable queries
9.1.8 sp_ddl_create_table - creates DDL for all user tables in the
current database
9.1.9 sp_spaceused_table
9.1.10 SQL to determine the space used for an index.
9.1.11 sp_helpoptions - Shows what options are set for a database.
9.1.12 sp_days - returns days in current month.
9.1.13 sp__optdiag - optdiag from within isql
9.1.14 sp_desc - a simple list of a table's columns
9.1.15 sp_lockconfig - Displays locking schemes for tables.

Shell Scripts

9.2.1 SQL and sh(1) to dynamically generate a dump/load database command.
9.2.2 update statistics script

Perl/Sybperl

9.3.1 SybPerl - Perl interface to Sybase.
9.3.2 dbschema.pl - Sybperl script to reverse engineer a database.
9.3.3 ddl_insert.pl - creates insert DDL for a table.
9.3.4 int.pl - converts

12: Miscellany

12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase

ASA

Adaptive Server Anywhere

0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information

REP

Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication
Server?

Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?

Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log
on OLTP is filling.

Additional Information/Links



4.1 Links
4.2 Newsgroups

David Owen

Apr 20, 2004, 9:45:02 AM
Archive-name: databases/sybase-faq/part3

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Sybase Frequently Asked Questions


Sybase FAQ Home Page | Adaptive Server Enterprise FAQ | Adaptive Server Anywhere FAQ |
Repserver FAQ | Search the FAQ

Sybase Replication Server

1. Introduction to Replication Server
2. Replication Server Administration
3. Troubleshooting Replication Server
4. Additional Information/Links


Introduction to Replication Server

1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication Server?

Thanks go to Manish I Shah for major help with this introduction.


-------------------------------------------------------------------------------

1.1 Introduction

-------------------------------------------------------------------------------

What is Replication Server?

Replication Server moves transactions (inserts, updates and deletes) at the
table level from a source dataserver to one or more destination dataservers.
The dataserver could be ASE or another major DBMS flavour (including DB2,
Informix and Oracle). The source and destinations need not be of the same type.

What can it do?

* Move data from one source to another.
* Move only a subset of data from source to destination. So, you can
subscribe to a subset of the rows, or a subset of the columns, in the source
table, e.g. select * from clients where state = 'NY' (see the sketch after
this list).
* Manipulation/transformation of data when moving from source to destination.
E.g. it can map data from a data-type in DB2 to an equivalent in Sybase.*
* Provide a warm-standby system. Can be incorporated with Open Switch to
provide a fairly seamless fail-over environment.
* Merge data from several source databases into one destination database
(could be for a warehouse type environment for example).
* Move data through a complicated network down to branch offices, say, only
sending the relevant data to each branch.

(* This is one of Sybase replication's real strengths: the ability to define
function-string classes, which allow the conversion of statements from one SQL
dialect to match the dialect of the destination machine. Ed)
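
As a sketch of how such a subset might be subscribed to (server, database and
object names here are invented, and the exact syntax should be checked against
the Replication Server manuals for your release):

create subscription clients_ny_sub
for clients_rep
with replicate at REPLICATE_DS.sales_db
where state = 'NY'
go

This assumes a replication definition named clients_rep already exists for the
clients table; a sketch of one is given under Replicate Minimum Columns in
Q2.1.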

How soon does the data move?

The data moves asynchronously. The time it takes to reach the destination
depends on the size of your transaction, the level of activity in that
particular database, the length of the chain (one or more replication servers
that the transaction has to pass through to reach the destination), the
thickness of the pipe (network), how busy your replication server is, etc.
Usually, on a LAN, for small transactions, this is about a second.

Back to top

-------------------------------------------------------------------------------

1.2 Replication Server Components

-------------------------------------------------------------------------------

Basic

Primary Dataserver

The source of data, where client applications enter, delete and modify data.
As mentioned before, this need not be ASE; it can be Microsoft SQL Server,
Oracle, DB2 or Informix. (I know that I should get a complete list.)

Replication Agent/Log Transfer Manager

The Log Transfer Manager (LTM) is a separate program/process which reads the
transaction log of the source server and transfers the transactions to the
replication server for further processing. With ASE 11.5, this has become part
of ASE and is now called the Replication Agent. However, you still need to use
an LTM for non-ASE sources. I imagine there is a version of the LTM for each
kind of source (DB2, Informix, Oracle, etc). When replication is active, you
see one connection per replicated database in the source dataserver (sp_who).

Replication Server(s)

The replication server is an Open Server/Open Client application. The server
part receives transactions being sent by either the source ASE or the source
LTM. The client part sends these transactions to the target server which could
be another replication server or the final dataserver. As far as I know, the
server does not include the client component of any of the other DBMSes out of
the box.

Replicate (target) Dataserver

The server in which the final replication server (in the queue) repeats the
transactions performed on the primary. You will see one connection for each
target database in the target dataserver when the replication server is
actively transferring data (when idle, the replication server disconnects, or
"fades out" in replication terminology).

Back to top

-------------------------------------------------------------------------------

1.3 What is the Difference Between Replication Server and SQL Remote?

-------------------------------------------------------------------------------

Both SQL Remote and Replication Server perform replication. SQL Remote was
originally part of the Adaptive Server Anywhere tool kit and is intended for
intermittent replication. (The classic example is that of a salesman
connecting on a daily basis to upload sales and download new prices and
inventory.) Replication Server is intended for near real-time replication
scenarios.

Back to top

-------------------------------------------------------------------------------


Replication Server Administration

2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1
2.5 What is the difference between an LTM and a RepAgent?
2.6 Which should I choose, RepAgent or LTM?


-------------------------------------------------------------------------------

2.1 How can I improve throughput?

-------------------------------------------------------------------------------

Check the Obvious

First, ensure that you are only replicating those parts of the system that
need to be replicated. Some of this is obvious: don't replicate any table that
does not need to be replicated, and check that you are only replicating the
columns you need. Replication is very sophisticated and will allow you to
replicate both a subset of the columns and a subset of the rows.

Replicate Minimum Columns

Once the replication is set up and synchronised, it is only necessary to
replicate those parts of the primary system that actually change. You are
already replicating only those rows and columns that need to be replicated;
make sure that, within them, only the actual changes are sent. Check that each
replication definition is defined using the clause:

create replication definition rep_def_name
with primary...
...
replicate minimal columns
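
For illustration, a fuller version of such a definition might look like the
following (dataserver, database, table and column names are all hypothetical):

create replication definition clients_rep
with primary at PRIMARY_DS.sales_db
with all tables named 'clients'
(id int, name varchar(40), state char(2))
primary key (id)
replicate minimal columns
go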

Second Replication Server

This might be appropriate in a simple environment on systems with spare cycles
and limited spare bandwidth on the network. When Sybase replicates from a
primary to a replicate using only one replication server, the data is
transferred across the network uncompressed. However, the communication between
two replication servers is compressed. By installing a second replication
server it is possible to reduce dramatically the bandwidth needed to replicate
your data.

Dedicated Network Card

Obviously, if replication is sharing the same network resources that all of the
clients are using, there is the possibility for a bottleneck if the network
bandwidth is close to saturation. If a second replication server is not going
to cut it since you already have one or there are no spare cycles, then a
second network card may be the answer.

First, you will need to configure ASE to listen on two network connections.
This is relatively straightforward, and there is no change to the client
configuration: the clients all continue to talk to Sybase using the same
connection. When defining the replication server, ensure that the
interfaces/sql.ini entry that it uses only has the second connection in it.
This may involve some jiggery-pokery with environment variables, but should be
possible, even on NT! You need to be a little careful with network
configuration. Sybase will communicate with the two servers on the correct
addresses, but if the underlying operating system believes that both the
clients and the repserver can be serviced by the same card, then it will use
the first card that it comes to. So, if all of the clients, ASE and the
replication server were on 192.168.1.0, and the host running ASE had two cards
onto this same segment, then it would route all packets through the first
card. OK, so this is a very simplistic error to correct, but similar things
can happen with more convoluted and, superficially, better thought out
configurations.

+---------+                             +-----------+   +-----------+
|         |--> NE(1) --> All Clients... |           |   |           |
| Primary |                             | repserver |   | replicate |
|         |--> NE(2) ------------------>|           |-->|           |
|         |                             |           |   |           |
+---------+                             +-----------+   +-----------+

So, configure NE(1) to be on 192.168.1.0, say, and NE(2) to be on 192.168.2.0
and all should be well. OK, so my character art is not perfect, but I think
that you get the gist!
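
As a rough sketch of the interfaces entries involved (Solaris-style syntax;
server, host and port names are invented), ASE's own interfaces file carries
both listeners, while the copy given to the replication server names only the
second card:

# interfaces file used by ASE and the clients - ASE listens on both cards
PRIMARY
    master tcp ether primary-net1 4100
    master tcp ether primary-net2 4100
    query tcp ether primary-net1 4100

# interfaces copy used only by the replication server - second card only
PRIMARY
    query tcp ether primary-net2 4100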

No Network Card

If RepServer resides on the same physical machine as either the primary or the
replicate, it is possible to use the localhost or loopback network device. The
loopback device is a network interface that connects back to itself without
going through the network interface card. It is almost always uses the IP
address 127.0.0.1. So, by applying the technique described above, but instead
of using a dedicated network card, you use the loopback device. Obviously, the
two servers have to be on the same physical machine or it won't work!

Back to top

-------------------------------------------------------------------------------

2.2 Where should I install replication server?

-------------------------------------------------------------------------------

A seemingly trivial question, but one that can cause novices a bit of worry.

There are three answers: on the primary machine, on the replicate machine or on
a completely separate machine. There is no right answer, and if you are doing
an initial install it probably pays to consider the future, consider the
proposed configuration and have a look at the load on the available machines.

It is probably fair to say that replication is not power hungry, but neither is
it free. If the primary is only just about coping with its current load, then
it might be as well to look into hosting it on another machine. The same
argument applies to the replicate. If you think that network bandwidth may be
an issue, and you may have to add a second replication server, you may be
better off starting with the repserver running on the primary. It is marginally
easier to add a repserver to an existing configuration if the first repserver
is on the primary.

Remember that a production replication server on Unix will require raw devices
for the stable devices and that these can be more than 2GB in size. If you are
restricted in the number of raw partitions you have available on a particular
machine, then this may have a bearing. See Q2.3.

Installing replication server on its own machine will, of course, introduce all
sorts of problems of its own, as well as answering some. The load on the
primary or the replicate is reduced considerably, but you are definitely going
to add some load to the network. Remember that ASE->Rep and Rep->ASE is
uncompressed. It is only Rep->Rep that is compressed.

Back to top

-------------------------------------------------------------------------------

2.3 Using large raw partitions with Replication Server on Unix.

-------------------------------------------------------------------------------

It is good practice with production installations of Replication Server on
Unix to use raw partitions for the stable devices, for just the same reason
that production ASEs use raw partitions. Raw devices can be a maximum of 2GB
with Replication Server up to release 11.5. (I have not checked 12.)

In order to utilise a raw partition that is greater than 2GB in size you can do
the following (remember all of the cautionary warnings about trying this sort
of stuff out in development first!):

add partition firstpartition on '/dev/rdsk/c0t0d0s0' with size 2024
go
add partition secondpartition on '/dev/rdsk/c0t0d0s0' with size 2024
starting at 2048
go

Notice that the initial partition is sized at 2024MB and not 2048. I have not
found this in the documentation, but replication certainly seems to have a
problem allocating a full 2GB. Interestingly, doing the same operation through
Rep Server Manager or Sybase Central caused no problems at all.

Back to top

-------------------------------------------------------------------------------

2.4 How to replicate col = col + 1

-------------------------------------------------------------------------------

Firstly, while the rule that you never update a primary key may be a
philosophical choice in a non-replicated system, it is an architectural
requirement of a replicated system.

If you use simple data replication, and your primary table is:

id
---
1
2
3

and you issue a:

update table set id=id+1

Rep server will do this in the replicate:

begin tran
update table set id=2 where id=1
update table set id=3 where id=2
update table set id=4 where id=3
commit tran

Hands up all who can see a bit of a problem with this! Remember, repserver
doesn't replicate statements; it replicates the results of statements.

One way to perform this update is to build a stored procedure on both sides
that executes the necessary update, and then replicate the stored procedure
call.
Back to top

-------------------------------------------------------------------------------

2.5 What is the difference between an LTM and a RepAgent?

-------------------------------------------------------------------------------

As described in Section 1.2, Log Transfer Managers (LTMs) and RepAgents are the
processes that transfer data between ASE and the Replication Server.

LTMs were delivered with the first releases of Replication Server. Each LTM is
a separate process at the operating system level that runs along side ASE and
Replication Server. As with ASE and Replication Server, a RUN_<ltm_server> and
configuration file is required for each LTM. One LTM is required for each
database being replicated.

Along with ASE 11.5 a new concept was introduced, that of the RepAgent. I am
not sure if you needed to use RepServer 11.5 as well, or whether RepAgents
could talk to earlier versions of Replication Server. Each RepAgent is, in
effect, a slot-in replacement for an LTM. However, instead of running as a
separate operating system process, it runs as a thread within ASE. Pretty much
all of the requirements for replication using an LTM apply to RepAgents (one
per database being replicated, etc.), but now you do not need separate
configuration files.

Back to top

-------------------------------------------------------------------------------

2.6 Which should I use, RepAgent or LTM?

-------------------------------------------------------------------------------

The differences between RepAgents and LTMs are discussed in Section 2.5. Which
then to choose? There are pros and cons to both; however, it should be stated
up front that RepAgents are the latest offering and I believe that Sybase would
expect you to use them. Certainly the documentation for LTMs is a little
buried, implying that Sybase does not consider them to be as current as
RepAgents.

LTM Cons:

* Older technology. Not sure if it is being actively supported.
* Not integrated within ASE, so there is a (small) performance penalty.
* Separate processes, so need additional monitoring in production
environments.

LTM Pros:

* Possible to restart LTM without having to restart ASE.

RepAgent Cons:

* If it crashes it is possible that you will have to restart ASE in order to
restart RepAgent.

RepAgent Pros:

* Latest, and presumably greatest, offering.
* Tightly integrated with ASE so good performance.
* Less to manage, no extra entries in the interfaces file.

Back to top

-------------------------------------------------------------------------------


Replication Server Trouble Shooting

3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log on
OLTP is filling.


-------------------------------------------------------------------------------

3.1 Why am I running out of locks on the replicate side?

-------------------------------------------------------------------------------

Sybase replication works by taking each transaction that occurs in the primary
dataserver and applying it to the replicate. Since replication works on the
transaction log, a single, atomic update on the primary side that updates a
million rows will be translated into a million single-row updates. This may
seem very strange, but it is a simple consequence of how it works. On the
primary, this million-row update will attempt to escalate the locks that it
has taken out to an exclusive table lock. However, on the replicate side each
row is updated individually, much as if they were being updated within a
cursor loop. Now, Sybase only tries to escalate locks taken out by a single
atomic statement (see ASE Qx.y), so it will never try to escalate these locks.
However, since the updates are taking place within a single transaction,
Sybase will need to take out enough page locks to lock the million rows.

So, how much should you increase the locks parameter on the replicate side? A
good rule of thumb might be to double it, or add 40,000, whichever is the
larger. This has certainly worked for us.
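
For example, to raise the limit to an illustrative 80,000 locks on the
replicate dataserver (depending on the release, a restart of ASE may be needed
for the change to take effect):

1> sp_configure "number of locks", 80000
2> go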

Back to top

-------------------------------------------------------------------------------

3.2 Someone was playing with replication and now the transaction log on OLTP
is filling.

-------------------------------------------------------------------------------

Once replication has been configured, ASE adds another marker to the
transaction log. The first marker is the conventional one that marks which
transactions have had their data written to disk. The second is there to
ensure that the transactions have also been replicated. Clearly, if someone
installed replication and did not clean up properly after themselves, this
marker will still be there and consequently the transaction log will be filling
up. If you are certain that replication is not being used on your system, you
can disable the secondary truncation marker with the following commands:

1> use <database>
2> go
1> dbcc settrunc(ltm, ignore)
2> go

The above code is the normal mechanism for disabling the truncation point, and
I have never had a problem with it. However, an alternative mechanism for
disabling the truncation point is given below. I do not know if it will work
in situations where the previous example won't, or if it works for databases
that are damaged. If someone knows when and why you would use it, please let
me know (mailto:do...@midsomer.org).

1> sp_role "grant", sybase_ts_role, sa
2> go
1> set role sybase_ts_role on
2> go
1> dbcc dbrepair(dbname, ltmignore)
2> go
1> sp_role "revoke", sybase_ts_role, sa
2> go

This scenario is also very common if you load a copy of your replicated
production database into development.

Back to top

-------------------------------------------------------------------------------


Additional Information/Links



4.1 Links
4.2 Newsgroups


-------------------------------------------------------------------------------

4.1 Links

-------------------------------------------------------------------------------

Thierry Antinolfi has a replication FAQ at his site,
http://pro.wanadoo.fr/dbadevil, that covers a lot of good stuff.

Rob Verschoor has a 'Replication Server Tips & Tricks' section on his site, as
well as an indispensable quick reference guide!

Back to top

-------------------------------------------------------------------------------

4.2 Newsgroups

-------------------------------------------------------------------------------

There are a number of newsgroups that can deal with questions. Sybase have
several in their own forums area.

For Replication Server:

sybase.public.rep-server
sybase.public.rep-agent

for SQL Remote and the issues of replicating with ASA:

sybase.public.sqlanywhere.replication

and of course, there is always the ubiquitous

comp.databases.sybase.

Back to top

-------------------------------------------------------------------------------


David Owen

Apr 20, 2004, 9:45:03 AM
Archive-name: databases/sybase-faq/part4

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.
Sybase Frequently Asked Questions


Sybase FAQ Home Page | Adaptive Server Enterprise FAQ | Adaptive Server Anywhere FAQ |
Repserver FAQ | Search the FAQ

Adaptive Server Enterprise

0. What's in a name?
1. ASE Administration
1.1 Basic Administration
1.2 User Database Administration
1.3 Advanced Administration
1.4 General Troubleshooting
1.5 Performance and Tuning
1.6 Server Monitoring
2. Platform Specific Issues
2.1 Solaris
2.2 NT
2.3 Linux
3. DBCC's
4. isql
5. bcp
6. SQL Development
6.1 SQL Fundamentals
6.2 SQL Advanced
6.3 Useful SQL Tricks
7. Open Client
9. Freeware
10. Sybase Technical News
11. Additional Information
12. Miscellany

-------------------------------------------------------------------------------

What's in a name?

Throughout this FAQ you will find references to SQL Server and, starting with
this release, ASE or Adaptive Server Enterprise to give it its full name. You
might also be a little further confused, since Microsoft also seem to have a
product called SQL Server.

Well, back at about release 4.2 of Sybase SQL Server, the products were
exactly the same; Microsoft were to do the port to NT. It is pretty well
documented that there was then a falling out. Both companies kept the same
name for their data servers and confusion began to reign. In an attempt to
sort this out, Sybase renamed their product Adaptive Server Enterprise (ASE)
starting with version 11.5.

I found this quote in a Sybase manual the other day:

Since changing the name of Sybase SQL Server to Adaptive Server Enterprise,
Sybase uses the names Adaptive Server and Adaptive Server Enterprise to refer
collectively to all supported versions of the Sybase SQL Server and Adaptive
Server Enterprise. Version-specific references to Adaptive Server or SQL Server
include version numbers.

I will endeavour to do the same within the FAQ, but the job is far from
complete!

Back to Top

Basic ASE Administration

1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop ASE when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming Certified.
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How can I run the ASE upgrade manually?
1.1.14 We have lost the sa password, what can we do?
1.1.15 How do I set a password to be null?
1.1.16 Does Sybase support Row Level Locking?
1.1.17 What platforms does ASE run on?
1.1.18 How do I backup databases > 64G on ASE prior to 12.x?


-------------------------------------------------------------------------------

1.1.1: What is SQL Server and ASE?

-------------------------------------------------------------------------------

Overview

Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System 10
has some significant improvements over the Sybase 4.x product line. Namely:

* the ability to allocate more memory to the dataserver without degrading its
performance.
* the ability to have more than one database engine to take advantage of
multi-processor cpu machines.
* a minimally intrusive process to perform database and transaction dumps.

Background and More Terminology

An ASE (SQL Server) is simply a Unix process, also known as the database
engine. It has multiple threads to handle asynchronous I/O and other tasks. The
number of threads spawned is the number of engines (more on this in a second)
times five. This is the current implementation of Sybase System 10, 10.0.1 and
10.0.2 on IRIX 5.3.

Each ASE allocates the following resources from a host machine:

* memory and
* raw partition space.

Each ASE can have up to 255 databases. In most implementations the number of
databases is limited to what seems reasonable based on the load on the ASE.
That is, it would be impractical to house all of a large company's databases
under one ASE because the ASE (a Unix process) will become overloaded.

That's where the DBA's experience comes in: by interrogating the user
community we determine how much activity is going to result on a given
database or databases, and from that we determine whether to create a new ASE
or to house the new database under an existing ASE. We do make mistakes (and
businesses grow) and have to move databases from one ASE to another. At times
ASEs need to move from one CPU server to another.

With Sybase System 10, each ASE can be configured to have more than one engine
(each engine is again a Unix process). There's one primary engine that is the
master engine and the rest of the engines are subordinates. They are assigned
tasks by the master.

Interprocess communication among all these engines is accomplished with shared
memory.

Sometimes, when a DBA issues a Unix kill command to extinguish a maverick
ASE, the subordinate engines are forgotten. This leaves the shared memory
allocated, and eventually we may get into situations where swapping occurs
because this memory is locked. To find engines that belong to no master
ASE, simply look for engines owned by /etc/init (process id 1). These
engines can be killed -- this is just FYI and is a DBA duty.

Before presenting an example of an ASE, some other topics should be covered.

Connections

An ASE has connections to it. A connection can be viewed as a user login, but
not necessarily so: a client (a user) can spark up multiple instances of their
application, and each instance establishes its own connection to the ASE. Some
clients may require two or more per invocation. So typically DBAs are only
concerned with the number of connections, because the number of users does not
provide sufficient information for us to do our job.

Connections take up ASE resources, namely memory, leaving less memory for the
ASE's available cache.

ASE Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to an ASE. It was around 80MB, with 40MB being the typical max. This
was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really was no limit. For instance, we had an ASE
cranked up to 300MB under 10. With System 11 and 12 this has been further
extended: ASEs with 4G bytes of memory are not uncommon. I have not heard of
an 11.9.3 or a 12 server with more than 4G bytes, but I am sure that they are
not far away.

The memory in an ASE is primarily used to cache data pages from disk. Consider
that the ASE is a lightweight operating system: handling users (connections),
allocating memory to users, keeping track of which data pages need to be
flushed to disk and the like. Very sophisticated and complex. Obviously, if a
data page is found in memory it's much faster to retrieve than going out to
disk.

Each connection takes away a little bit from the available memory that is used
to cache disk pages. Upon startup, the ASE pre-allocates the memory that is
needed for each connection, so it's not prudent to configure 500 connections
when only 300 are needed: we'd waste 200 connections and the memory associated
with them. On the other hand, it is also imprudent to under-configure the
number of connections; users have a way of soaking up a resource (like an ASE),
and if users have all the connections a DBA cannot get into the server to
allocate more connections.
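
For instance, to size the server for an illustrative 350 connections (the
parameter name shown is the System 11 and later spelling):

1> sp_configure "number of user connections", 350
2> go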

One of the neat things about an ASE is that it reaches (just like a Unix
process) a working set. That is, upon startup it'll do a lot of physical I/Os
to seed its cache, to get lookup information for typical transactions and the
like. So initially, the first users take heavy hits because their requests
have to be satisfied by physical I/O. Subsequent transactions do less physical
I/O and more logical I/O. A logical I/O is an I/O that is satisfied in the
ASE's buffer cache. Obviously, this is the preferred condition.

DSS vs OLTP

We throw around terms like everyone is supposed to know this high-tech lingo.
The problem is that DSS and OLTP are two different animals, each requiring an
ASE to be tuned accordingly.

Well, here's the low down.

DSS  - Decision Support System
OLTP - Online Transaction Processing

What do these mean? OLTP applications are those that have very short units of
work for each connection: fetch this row and, with the results of it, update
one or two other rows. Basically, a small number of rows is affected per
transaction, in rapid succession, with no significant wait times between
operations in a transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of data
to aggregate some values, so the transactions typically involve thousands of
rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same ASE because the nature of OLTP
is to grab things quickly but the nature of DSS is to stick around for a long
time reading tons of information and summarizing the results.

What a DSS application does is flush out the ASE's data page cache because of
the tremendous amount of I/O it performs. This is obviously very bad for OLTP
applications, because the small transactions are now hurt by this trauma. When
it was only OLTP, a great percentage of I/O was logical (satisfied in the
cache); now transactions must perform physical I/O.

That's why it's good not to mix DSS and OLTP if at all possible.

If mixing them cannot be avoided, then you need to think carefully about how
you configure your server. Use named data caches to ensure that the very
different natures of OLTP and DSS do not conflict with each other. If you have
tables that are shared, consider using dirty reads for the DSS applications if
at all possible, since this will help not to block the OLTP side.

Asynchronous I/O

Why async I/O? The idea is that in a typical online transaction processing
(OLTP) application, you have many connections (over 200) and short
transactions: get this row, update that row. These transactions are typically
spread across different tables of the databases. The ASE can then perform each
one of these asynchronously, without having to wait for others to finish.
Hence the importance of having async I/O fixed on our platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically don't
do this. It appears that the master engine goes to processor 0 and subsequent
subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines do not make
Sybase perform five times as fast; indeed, we max out with four engines, after
which performance starts to degrade. This is supposed to be fixed with Sybase
System 11.

Putting Everything Together

As previously mentioned, an ASE is a collection of databases with connections
(that are the users) to apply and retrieve information to and from these
containers of information (databases).

The ASE is built and its master device is typically built over a medium-sized
(50MB) raw partition. The tempdb is built over a cooked (regular, as opposed
to raw) file system to realize any performance gains from buffered writes. The
databases themselves are built over raw logical devices to ensure their
integrity. (Note: in System 12 you can use the dsync flag to ensure that
writes to file system devices are secure.)

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix, VMS or NT. However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a physical
device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it is mapped by
the DBA to a logical device. Depending on the type of the device, it is
allocated, by the DBA, to the appropriate place (vague enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly, so the
DBA needs to create a dump device.

We'll call it datadump_for_my_db logically, in the database, but we'll map it
to the physical world as /lv1/dumps/in_your_eye.dat. So the DBA will write a
script that connects to the ASE and issues a command like this:

dump database my_stinking_db to datadump_for_my_db
go

and the backupserver (out of this scope) takes the contents of my_stinking_db
and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions because Sybase needs to be guaranteed that all its writes
complete successfully. Otherwise, if it posted a write to a file system buffer
(as on a cooked file system) and the machine crashed, then as far as Sybase is
concerned the write was committed. It was not, however, and the integrity of
the database would be lost. That is why Sybase needs raw partitions. But back
to the matter at hand...

When building a new ASE, the DBA determines how much space they'll need for all
the databases that will be housed in this ASE.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the changes
are kept. That is, every row that is updated/deleted/inserted gets placed into
the log portion and then applied to the data portion of the database.

That's why the DBA strives to place the raw devices for logs on separate
disks: everything has to single-thread through the log.

A transaction is a collection of SQL statements (insert/delete/update) that are
grouped together to form a single unit of work. Typically they map very closely
to the business.

I'll quote the Sybase ASE Administration Guide on the role of the log:

The transaction log is a write-ahead log. When a user issues a statement
that would modify the database, ASE automatically writes the changes to the
log. After all changes for a statement have been recorded in the log, they
are written to an in-cache copy of the data page. The data page remains in
cache until the memory is needed for another database page. At that time,
it is written to disk. If any statement in a transaction fails to complete,
ASE reverses all changes made by the transaction. ASE writes an "end
transaction" record to the log at the end of each transaction, recording
the status (success or failure) of the transaction.

As such, the log will grow as user connections affect changes to the database.
The need arises to then clear out the log of all transactions that have been
flushed to disk. This is performed by issuing the following command:

dump transaction my_stinking_db to logdump_for_my_db
go

The ASE will write to the dump device all transactions that have been committed
to disk and will delete the entries from its copy, thus freeing up space in the
log. Dumping of the transaction logs is accomplished via cron (the Unix
scheduler; NT users would have to resort to at or some third-party tool). We
schedule the heavily hit databases every 20 minutes during peak times.

A single user can fill up the log by issuing a begin transaction with no
corresponding commit/rollback transaction. This is because all their
changes are being applied to the log as an open-ended transaction, which is
never closed. This open-ended transaction cannot be flushed from the log,
and it therefore grows until it occupies all of the free space on the log
device.

And the way we dump it is with a dump device. :-)
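
Incidentally, from System 11 onwards the open-ended transaction described in
the note above can be spotted in master..syslogshold (see Q6.1.6). A minimal
check, using the example database name from above:

1> select spid, starttime, name
2> from master..syslogshold
3> where dbid = db_id("my_stinking_db")
4> go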

An Example

If the DBA has three databases to plop on this ASE and they need a total of
800MB of data and 80MB of log (because that's what really matters to us), then
they'd probably do something like this:

1. allocate sufficient raw devices to cover the data portion of all the
databases
2. allocate sufficient raw devices to cover the log portion of all the
databases
3. start allocating the databases to the devices.

For example, assuming the following database requirements:

Database Requirements

+----+------+-----+
| DB | Data | Log |
|----+------+-----|
| a  | 300  | 30  |
| b  | 400  | 40  |
| c  | 100  | 10  |
+----+------+-----+

and the following devices:

Devices

+---------------+--------------------+------+
| Logical       | Physical           | Size |
|---------------+--------------------+------|
| dks3d1s2_data | /dev/rdsk/dks3d1s2 | 500  |
| dks4d1s2_data | /dev/rdsk/dks4d1s2 | 500  |
| dks5d1s0_log  | /dev/rdsk/dks5d1s0 | 200  |
+---------------+--------------------+------+

then the DBA may elect to create the databases as follows:

create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine, because it can be used
for future growth. While the Sybase ASE is running, no other Sybase ASE can
re-allocate these physical devices.

TempDB

TempDB is simply a scratch-pad database. It gets recreated when a SQL Server is
rebooted. The information held in this database is temporary data. A query may
build a temporary table to assist it; the Sybase optimizer may decide to create
a temporary table to assist itself.

Since this is an area of constant activity, we create this database over a
cooked file system, which has historically proven to have better performance
than raw, due to the buffered writes provided by the operating system.

Port Numbers

When creating a new ASE, we allocate a port to it (currently, the DBA reserves
ports 1500 through 1899 for this use). We then map a host name to the different
ports: hera, fddi-hera and so forth. We can actually have more than one port
number for an ASE, but we typically don't do this.

Back to top

-------------------------------------------------------------------------------

1.1.2: How to start/stop ASE when CPU reboots

-------------------------------------------------------------------------------

Below is an example of the various files (on Irix) that are needed to start/
stop an ASE. The information can easily be extended to any UNIX platform.

The idea is to allow as much flexibility to the two classes of administrators
who manage the machine:

* The System Administrator
* The Database Administrator

Any errors introduced by the DBA will not interfere with the System
Administrator's job.

With that in mind we have the system startup/shutdown file /etc/init.d/sybase
invoking a script defined by the DBA: /usr/sybase/sys.config/
{start,stop}.sybase

/etc/init.d/sybase

On some operating systems this file must be linked to corresponding entries in
/etc/rc0.d and /etc/rc2.d -- see rc0(1M) and rc2(1M).

#!/bin/sh
# last modified: 10/17/95, sr.
#
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files
CONFIG=/usr/sybase/sys.config

# IRIX chkconfig(1M) is used below to test whether a service flag is on
IS_ON=/sbin/chkconfig

if $IS_ON verbose ; then        # For a verbose startup and shutdown
    ECHO=echo
    VERBOSE=-v
else                            # For a quiet startup and shutdown
    ECHO=:
    VERBOSE=
fi

case "$1" in
'start')
    if $IS_ON sybase; then
        if [ -x $CONFIG/start.sybase ]; then
            $ECHO "starting Sybase servers"
            /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
        else
            <error condition>
        fi
    fi
    ;;

'stop')
    if $IS_ON sybase; then
        if [ -x $CONFIG/stop.sybase ]; then
            $ECHO "stopping Sybase servers"
            /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
        else
            <error condition>
        fi
    fi
    ;;

*)
    echo "usage: $0 {start|stop}"
    ;;
esac

/usr/sybase/sys.config/{start,stop}.sybase

start.sybase

#!/bin/sh -a
#
# Script to start sybase
#
# NOTE: different versions of sybase exist under /usr/sybase/{version}
#
# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
OUTPUT=">/dev/null 2>&1"
else
OUTPUT=""
fi
# 10.0.2 servers
HOME=/usr/sybase/10.0.2
cd $HOME
# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT
# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT
exit 0

stop.sybase

#!/bin/sh
#
# Script to stop sybase
#
# Determine if we need to spew our output
if [ -z "$1" ] ; then
OUTPUT=">/dev/null 2>&1"
else
OUTPUT="-v"
fi
eval killall -15 $OUTPUT dataserver backupserver sybmultbuf
sleep 2
# if they didn't die, kill 'em now...
eval killall -9 $OUTPUT dataserver backupserver sybmultbuf
exit 0

If your platform doesn't support killall, it can easily be simulated as
follows:

#!/bin/sh
#
# Simple killall simulation...
# $1 = signal
# $2 = process_name
#
#
# no error checking but assume first parameter is signal...
# what ya want for free? :-)
#
# NB: in ps -ef output the PID is in the second column, hence $2
kill -$1 `ps -ef | fgrep $2 | fgrep -v fgrep | awk '{ print $2 }'`

Back to top

-------------------------------------------------------------------------------

1.1.3: How do I move tempdb off of the Master Device?

-------------------------------------------------------------------------------

There used to be a section in the FAQ describing how to drop all of tempdb's
devices physically from the master device. This can make recovery of the
server impossible in case of a serious error, and so it is strongly recommended
that you do not do this, but simply drop the segments as outlined below.

Sybase TS Preferred Method of Moving tempdb off the Master Device.

This is the Sybase TS method of removing most activity from the master device:

1. Alter tempdb on another device:
1> alter database tempdb on ...
2> go
2. Use the tempdb:
1> use tempdb
2> go
3. Drop the segments:
1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment "logsegment", tempdb, master
2> go
1> sp_dropsegment "system", tempdb, master
2> go
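Step 1 spelled out with a concrete (made-up) device name, plus a check that
the new fragment has appeared:

1> alter database tempdb on tempdb_dev = 100
2> go
1> sp_helpdb tempdb
2> go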

Note that there is still some activity on the master device. On a three
connection test that I ran:

while ( 1 = 1 )
begin
create table #x (col_a int)
drop table #x
end

there was one write per second. Not bad.

An Alternative

(I recently did some benchmarks comparing this method, the previous method
and a combination of both. According to sp_sysmon there was no difference
in activity at all. I leave it here just in case it proves useful to
someone.)

The idea of this handy script is to simply fill the first 2MB of tempdb thus
effectively blocking anyone else from using it. The slight gotcha with this
script, since we're using model, is that all subsequent database creates will
also have tempdb_filler installed. This is easily remedied by dropping the
table after creating a new database.

This script works because tempdb is rebuilt every time the ASE is rebooted.
Very nice trick!

/* this isql script creates a table in the model database. */
/* Since tempdb is created from the model database when the */
/* server is started, this effectively moves the active */
/* portion of tempdb off of the master device. */

use model
go

/* note: 2k row size */
create table tempdb_filler(
a char(255) not null,
b char(255) not null,
c char(255) not null,
d char(255) not null,
e char(255) not null
)
go

/* insert 1024 rows */
declare @i int
select @i = 1
while (@i <= 1024)
begin
insert into tempdb_filler values('a','b','c','d','e')
if (@i % 100 = 0) /* dump the transaction every 100 rows */
dump tran model with truncate_only
select @i=@i+1
end
go

Back to top

-------------------------------------------------------------------------------

1.1.4: How do I correct timeslice -201

-------------------------------------------------------------------------------

(Note, this procedure is only really necessary with pre-11.x systems. In
system 11 systems, these parameters are tunable using sp_configure.)

Why Increase It?

Basically, it will allow a task to be scheduled onto the CPU for a longer time.
Each task on the system is scheduled onto the CPU for a fixed period of time,
called the timeslice, during which it does some of its work; the work is
resumed when the task's next turn comes around.

The process has up until the value of ctimemax (a config block variable) to
finish its task. As the task is working away, the scheduler counts down
ctimemax units. When it gets to the value of ctimemax - 1, if it gets stuck and
for some reason cannot be taken off the CPU, then a timeslice error gets
generated and the process gets infected.

On the other hand, ASE will allow a server process to run as long as it needs
to. It will not swap the process out for another process to run. The process
will decide when it is "done" with the server CPU. If, however, a process goes
on and on and never relinquishes the server CPU, then Server will timeslice the
process.

Potential Fix

1. Shut down the ASE.
2. %buildmaster -dyour_device -yctimemax=2000
3. Restart your ASE. If the problem persists, contact Sybase Technical Support
   and notify them of what you have already done.
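On System 11 and later there should be no need to touch the config block at
all; the equivalent knob is exposed through sp_configure (a sketch, assuming
the parameter in question is cpu grace time, which is counted in clock ticks):

sp_configure "cpu grace time", 2000
go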

Back to top

-------------------------------------------------------------------------------

1.1.5: Certified Sybase Professional

-------------------------------------------------------------------------------

There have been changes in the process of becoming a Sybase Certified
Professional. There's a very informative link at http://www.sybase.com/
education/profcert, Professional Certification.

Rob Verschoor has put together some good stuff on his pages ( http://
www.euronet.nl/~syp_rob/certtips.html) that have pretty much all that you need
to know. He also has a quiz intended to test your knowledge of ASE and
RepServer.

Sybase have released some sample questions (look for them at http://
www.sybase.com/education/). The GUI requires MS Windows (at the time of
writing), but they are definitely a sample of what you will be asked. There are
also a couple of CDs available with yet more questions on them.

The Certification Backlash

There have been a couple of articles recently covering the backlash that seems
to be happening as far as certification is concerned. Several HR people have
said that if a person's CV (resume) is sent in covered in certifications then
it goes straight into the bit bucket. I do not know if this is true or not, but
one thing that you might wish to consider is the preparation of two CVs: one
with certifications, one without. If the job advert specifies that
certification is necessary, then send in the appropriate CV. If it does not
specify certification, send in the clean version. If you go into the interview
for a job that did not specify certifications up front and the interviewer
starts going on about you not being certified, you simply produce your card as
proof.

Back to top

-------------------------------------------------------------------------------

1.1.6: RAID and Sybase

-------------------------------------------------------------------------------

Here's a short summary of what you need to know about Sybase and RAID.

The newsgroup comp.arch.storage has a detailed FAQ on RAID, but here are a few
definitions:

RAID

RAID means several things at once. It provides increased performance through
disk striping, and/or resistance to hardware failure through either mirroring
(fast) or parity (slower but cheaper).

RAID 0

RAID 0 is just striping. It allows you to read and write quickly, but provides
no protection against failure.

RAID 1

RAID 1 is just mirroring. It protects you against failure, and generally reads
and writes as fast as a normal disk. It uses twice as many disks as normal (and
sends twice as much data across your SCSI bus, but most machines have plenty of
extra capacity on their SCSI busses.)

Sybase mirroring always reads from the primary copy, so it does not
increase read performance.

RAID 0+1

RAID 0+1 (also called RAID 10) is striping and mirroring together. This gives
you the highest read and write performance of any of the raid options, but uses
twice as many disks as normal.

RAID 4/RAID 5

RAID 4 and 5 have disk striping and use 1 extra disk to provide parity. Various
vendors have various optimizations, but this RAID level is generally much
slower at writes than any other kind of RAID.

RAID 7

I am not sure if this is a genuine RAID standard, further checking on your part
is required.

Details

Most hardware RAID controllers also provide a battery-backed RAM cache for
writing. This is very useful, because it allows the disk to claim that the
write succeeded before it has done anything. If there is a power failure, the
information will (hopefully) be written to disk when the power is restored. The
cache is very important because database log writes cause the process doing the
writes to stop until the write is successful. Systems with write caching thus
complete transactions much more quickly than systems without.

What RAID levels should my data, log, etc be on? Well, the log disk is
frequently written, so it should not be on RAID 4 or 5. If your data is
infrequently written, you could use RAID 4 or 5 for it, because you don't mind
that writes are slow. If your data is frequently written, you should use RAID
0+1 for it. Striping your data is a very effective way of avoiding any one disk
becoming a hot-spot. Traditionally, Sybase databases were divided among devices
by a human attempting to determine where the hot-spots are. Striping does this
in a straightforward fashion, and also continues to work if your data access
patterns change.

Your tempdb is data but it is frequently written, so it should not be on RAID 4
or 5.

If your RAID controller does not allow you to create several different kinds of
RAID volumes on it, then your only hope is to create a huge RAID 0+1 set. If
your RAID controller does not support RAID 0+1, you shouldn't be using it for
database work.

Back to top

-------------------------------------------------------------------------------

1.1.7: How to swap a db device with another

-------------------------------------------------------------------------------

Here are four approaches. Before attempting any of the following: Backup,
Backup, Backup.

Dump and Restore

1. Backup the databases on the device, drop the databases, drop the devices.
2. Rebuild the new devices.
3. Rebuild the databases (Make sure you recreate the fragments correctly - See
Ed Barlow's scripts (http://www.tiac.net/users/sqltech/) for an sp that
helps you do this if you've lost your notes. Failure to do this will
possibly lead to data on log segments and log on data segments).
4. Reload the database dumps!

Twiddle the Data Dictionary - for brave experts only.

1. Shut down the server.
2. Do a physical dump (using dd(1), or such utility) of the device to be
moved.
3. Load the dump to the new device
4. Edit the data dictionary (sysdevices.physname) to point to the new device,
   as sketched below.
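A sketch of step 4 (the logical and physical names here are made up; "allow
updates" must be enabled, and the change only takes effect when the server is
next started):

sp_configure "allow updates", 1
go
update master..sysdevices
set physname = "/dev/rdsk/new_device"     /* hypothetical new path */
where name = "logical_device_name"        /* hypothetical logical name */
go
sp_configure "allow updates", 0
go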

The Mirror Trick

1. Create a mirror of the old device, on the new device.
2. Unmirror the primary device, thereby making the _backup_ the primary
device.
3. Repeat this for all devices until the old disk is free.
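In T-SQL, steps 1 and 2 look something like this (a sketch; the device and
path names are made up):

disk mirror name = "old_dev", mirror = "/dev/rdsk/new_disk"
go
disk unmirror name = "old_dev", side = "primary", mode = remove
go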

dd (Unix only)

(This option is of no use if you need to move a device now; rather, it is for
when you anticipate moving a device at some point in the future.)

You may want to use this approach for creating any database.

Create (or use) a directory for symbolic links to the devices you wish to use.
Then create your database, but instead of going to /dev/device, go to
/directory/symlink. When it comes time to move your devices, you shut down the
server, simply dd(1) the data from the old device to the new device, recreate
the symbolic links to point at the new device and restart the ASE. Simple as
that.

Backups are a requisite in all cases, just in case.

Back to top

-------------------------------------------------------------------------------

1.1.8: Server naming and renaming

-------------------------------------------------------------------------------

There are three totally separate places where ASE names reside, causing much
confusion.

ASE Host Machine interfaces File

A master entry in here for server TEST will provide the network information
that the server is expected to listen on. The -S parameter to the dataserver
executable tells the server which entry to look for, so in the RUN_TEST file,
-STEST will tell the dataserver to look for the entry under TEST in the
interfaces file and listen on any network parameters specified by 'master'
entries.

TEST
master tcp ether hpsrv1 1200
query tcp ether hpsrv1 1200


Note that preceding the master/query entries there's a tab.

This is as far as the name TEST is used. Without further configuration the
server does not know its name is TEST, nor do any client applications.
Typically there will also be query entries under TEST in the local interfaces
file, and client programs running on the same machine as the server will pick
this connection information up. However, there is nothing to stop the query
entry being duplicated under another name entirely in the same interfaces file.

ARTHUR
query tcp ether hpsrv1 1200

isql -STEST or isql -SARTHUR will connect to the same server. The name is
simply a search parameter into the interfaces file.

Client Machine interfaces File

Again, as the server name specified to the client is simply a search parameter
for Open Client into the interfaces file, SQL.INI or WIN.INI, the name is
largely irrelevant. It is often set to something that means something to the
users, especially where they might have a choice of servers to connect to.
Also, multiple query entries can be set to point to the same server, possibly
using different network protocols, e.g. if TEST has the following master
entries on the host machine:

TEST
master tli spx /dev/nspx/ \xC12082580000000000012110
master tcp ether hpsrv1 1200

Then the client can have a meaningful name:

ACCOUNTS_TEST_SERVER
query tcp ether hpsrv1 1200

or alternative protocols:

TEST_IP
query tcp ether hpsrv1 1200
TEST_SPX
query tli spx /dev/nspx/ \xC12082580000000000012110

sysservers

This system table holds information about remote ASEs that you might want to
connect to, and also provides a method of naming the local server.

Entries are added using the sp_addserver system procedure - add a remote server
with this format:

sp_addserver server_name, null, network_name

server_name is any name you wish to refer to a remote server by, but
network_name must be the name of the remote server as referenced in the
interfaces file local to your local server. It normally makes sense to make the
server_name the same as the network_name, but you can easily do:

sp_addserver LIVE, null, ACCTS_LIVE

When you execute, for example, exec LIVE.master..sp_helpdb, the local ASE will
translate LIVE to ACCTS_LIVE and try to talk to ACCTS_LIVE via the ACCTS_LIVE
entry in the local interfaces file.

Finally, a variation on the sp_addserver command:

sp_addserver LOCALSRVNAME, local

names the local server (after a restart). This is the name the server reports
in the errorlog at startup, the value returned by @@SERVERNAME, and the value
placed in Open Client server messages. It can be completely different from the
names in RUN_SRVNAME or in local or remote interfaces - it has no bearing on
connectivity matters.
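You can check both the local name and the remote mappings from a SQL session
(srvname and srvnetname are the relevant columns in master..sysservers):

select @@servername
go
select srvid, srvname, srvnetname from master..sysservers
go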

Back to top

-------------------------------------------------------------------------------

1.1.9: How do I interpret the tli strings in the interface file?

-------------------------------------------------------------------------------

The tli string contained within Solaris interfaces files is a hex string
encoding the port and IP address. If you have an entry

SYBSRVR
master tli tcp /dev/tcp \x000204018196c4510000000000000000

Then it can be interpreted as follows:

x0002 no user interpretation (header info?)
0401 port number (1025 decimal)
81 first part of IP address (129 decimal)
96 second part of IP address (150 decimal)
c4 third part of IP address (196 decimal)
51 fourth part of IP address (81 decimal)

So, the above tli address is equivalent to

SYBSRVR
master tcp ether sybhost 1025

where sybhost's IP address is 129.150.196.81.

The following piece of Sybperl (courtesy of Michael Peppler) parses the tli
entries and prints the IP address and port number for each server in a Solaris
interfaces file.

#!/usr/local/bin/perl -w

use strict;

my $server;
my @dat;
my ($port, $ip);

while(<>) {
next if /^\s*$/;
next if /^\s*\#/;
chomp;
if(/^\w/) {
$server = $_;
$server =~ s/\s*$//;
next;
}

@dat = split(' ', $_);
($port, $ip) = parseAddress($dat[4]);
print "$server - $dat[0] on port $port, host $ip\n";
}

sub parseAddress {
my $addr = shift;

my $port;
my $ip;

my (@arr) = (hex(substr($addr, 10, 2)),
             hex(substr($addr, 12, 2)),
             hex(substr($addr, 14, 2)),
             hex(substr($addr, 16, 2)));
$port = hex(substr($addr, 6, 4));
$ip = join('.', @arr);

($port, $ip);
}

Back to top

-------------------------------------------------------------------------------

1.1.10: How can I tell the datetime my Server started?

-------------------------------------------------------------------------------

Method #1

The normal way would be to look at the errorlog, but this is not always
convenient or even possible. Since tempdb is rebuilt, and hence gets a new
creation date, every time the server starts, a SQL session can find the server
startup time to within a few seconds using:

select "Server Start Time" = crdate
from master..sysdatabases
where name = "tempdb"

Method #2

Another useful query is:

select * from sysengines

which gives the address and port number at which the server is listening.

Back to top

-------------------------------------------------------------------------------

1.1.11: Raw partitions or regular files?

-------------------------------------------------------------------------------

Hmmm... as always, this answer depends on how the vendor's platform implements
file system I/O for the ASE...

Performance Hit (synchronous vs asynchronous)

If, on a given platform, the ASE performs file system I/O synchronously, then
the ASE blocks on each read/write and throughput decreases tremendously.

The way the ASE typically works is that it will issue an I/O (read/write) and
save the I/O control block and continue to do other work (on behalf of other
connections). It'll periodically poll the workq's (network, I/O) and resume
connections when their work has completed (I/O completed, network data
xmit'd...).

Performance Hit (bcopy issue)

Assuming that the file system I/O is asynchronous (this can be done on SGI), a
performance hit may be realized when bcopy'ing the data from kernel space to
user space.

On a read, cooked I/O typically has to go from disk to kernel buffers, and from
kernel buffers to user space (again, SGI has something called direct I/O which
allows I/O to go directly to user space). The extra layer with the kernel
buffers is inherently slow. The data is moved between kernel buffers and user
space using bcopy(). On small operations this typically isn't much of an issue,
but in an RDBMS scenario the bcopy() layer is a significant performance hit
because it's done so often...

Performance Gain!

It's true, using file systems, at times you can get performance gains assuming
that the ASE on your platform does the I/O asynchronously (although there's a
caveat on this too... I'll cover that later on).

If your machine has sufficient memory and extra CPU capacity, you can realize
some gains by having writes return immediately because they're posted to
memory. Reads will gain from the anticipatory fetch algorithm employed by most
O/S's.

You'll need extra memory to house the kernel buffered data and you'll need
extra CPU capacity to allow bdflush() to write the dirty data out to disk...
eventually... but with everything there's a cost: extra memory and free CPU
cycles.

One argument is that instead of giving the O/S the extra memory (by leaving it
free) you should give it to the ASE and let it do its own caching... but that's
a different thread...

Data Integrity and Cooked File System

If the Sybase ASE is not certified to be used over a cooked file system, then
because of the kernel buffering described above you risk database corruption by
using a cooked file system anyway. The ASE thinks that it has posted its
changes out to disk, but in reality they have gone only to memory. If the
machine halts without bdflush() having had a chance to flush memory out to
disk, your database may become corrupted.

Some O/S's allow cooked files to be opened in a write-through mode, and whether
this is used depends on whether the ASE has been certified on cooked file
systems. If it has, it means that when the ASE opens a device which is on a
file system, it fcntl()'s the device to write-through.

When to use cooked file system?

I typically build my tempdb on cooked file system and I don't worry about data
integrity because tempdb is rebuilt every time your ASE/SQL Server is rebooted.

Back to top

-------------------------------------------------------------------------------

1.1.12: Is Sybase Y2K (Y2000) compliant?

-------------------------------------------------------------------------------

Sybase is year 2000 compliant at specific revisions of each product. Full
details are available at http://www.sybase.com, specifically (as these links
will undoubtedly change):

http://www.sybase.com/success/inc/corpinfo/year2000_int.html
http://www.sybase.com/Company/corpinfo/year2000_matrix.html

Note: Since we have made it to 2000 more or less intact, I see no reason to
include this question. I plan to remove with the next release of the FAQ. If
you feel strongly about leaving it in then let me know.

Back to top

-------------------------------------------------------------------------------

1.1.13 How Can I Run the ASE Upgrade Manually?

-------------------------------------------------------------------------------

How to Run the ASE Upgrade Manually

This document describes the steps required to perform a manual upgrade of ASE
from release 4.x or 10.0x to release 11.0.x. In most cases, however, you should
use sybinit to perform the upgrade.

BE SURE TO HAVE GOOD BACKUPS BEFORE STARTING THIS PROCEDURE.

1. Use release 11.0x sybinit to run the pre-eligibility test and Check
Reserved words. Make any necessary changes that are mentioned in the
sybinit log. The sybinit log is located in $SYBASE/init/logs/logxxxx.yyy.
2. Use isql to connect to the 4.x or 10.0x ASE and do the following tasks:
a. Turn on option to allow updates to system tables:
1> sp_configure "allow updates", 1
2> go

b. Checkpoint all databases:
1> use "dbname"
2> go
1> checkpoint
2> go

c. Shutdown the 4.x or 10.0x ASE.
1> shutdown
2> go
3. Copy the interfaces file to the release 11.0x directory.
4. Set the environment variable SYBASE to the release 11.0x directory.
5. Copy the runserver file to the release 11.0x $SYBASE/install directory.
6. Edit the $SYBASE/install/RUN_SYBASE (runserver file) to change the path
from the 4.x or 10.x dataserver directory to the new release 11.0x
directory.
7. Start ASE using the new runserver file.
% startserver -f$SYBASE/install/RUN_SYBASE
8. Run the upgrade program:

UNIX: $SYBASE/upgrade/upgrade -S"servername" -P"sapassword" \
          > $SYBASE/init/logs/mylog.log 2>&1
VMS:  SYBASE_SYSTEM:[SYBASE.UPGRADE]upgrade /password="sa_password"
          /servername="servername"

9. Shut down the SQL Server after a successful upgrade:
% isql -Usa -Pxxx -SSYBASE
1> shutdown
2> go
10. Start ASE using the release 11.0x runserver file.

% startserver -f$SYBASE/install/RUN_SYBASE

11. Create the sybsystemprocs device and database if upgrading from 4.9.x. You
should create a 21MB sybsystemprocs device and database.
a. Use the disk init command to create the sybsystemprocs device manually,
for example:

disk init name = "sybprocsdev",
    physname = "/dev/sybase/rel1102/sybsystemprocs.dat",
    vdevno = 4, size = 10752
go

To check which vdevno values are available, type:

1> select distinct low/16777216 from sysdevices
2> order by low
3> go

A sample create database command:

create database sybsystemprocs on sybprocsdev = 21
go

Please refer to the "Sybase ASE Reference Manual" for more information on
these commands.

12. Run the installmaster and installmodel scripts:
UNIX: % isql -Usa -Psapassword -i$SYBASE/scripts/installmaster
UNIX: % isql -Usa -Psapassword -i$SYBASE/scripts/installmodel
VMS: $ isql /user="sa" /password="sapass" /input="[sybase_system.scripts]installmaster"
VMS: $ isql /user="sa" /password="sapass" /input="[sybase_system.scripts]installmodel"
13. If you upgraded from ASE 4.9.2, you will need to run sp_remap to remap the
compiled objects. sp_remap remaps stored procedures, triggers, rules,
defaults, and views to be compatible with the current release of ASE. Please
refer to the Reference Manual Volume II for more information on the
sp_remap command.

The syntax for sp_remap:

sp_remap object_name

If you are upgrading to ASE 11.0.x and the upgrade process failed when using
sybinit, you can invoke sybinit and choose "remap query trees" from the upgrade
menu screen. This option only appears after a failed upgrade.

Back to top

-------------------------------------------------------------------------------

1.1.14 We have lost the sa password, what can we do?

-------------------------------------------------------------------------------

The first thing to remember is Douglas Adams' famous advice: "Don't panic!"

I know that most people use the 'sa' account all of the time, which is fine if
there is only ever one DBA administering the system. If you have more than one
person accessing the server using the 'sa' account, consider using sa_role
enabled accounts and disabling the 'sa' account. Funnily enough, this is
obviously what Sybase think, because it is one of the questions in the
certification exams.

If you see that someone is logged in using the 'sa' account, or is using an
account with 'sa_role' enabled, then you can do the following:

sp_configure "allow updates to system tables",1
go
update syslogins set password=null where name = 'sa'
go
sp_password null,newPassword
go

You must remember to reset the password before exiting isql or sqsh. I thought
that setting it to null would be enough, and exited isql thinking that I would
be able to get in with a null password. Take it from me that the risk is not
worth it. It failed for me and I had to kill the dataserver and regenerate a
new password. I just tried the above method and it works fine.

If you have a user with sso_role enabled, log in with that account and change
the 'sa' password that way. It is often a good idea to have a separate site
security officer, just to get you out of this sticky situation. It certainly
stops you looking like an idiot in management's eyes for having to reboot
production because you have locked yourself out!

OK, so we have got to the point where there are no accounts with sufficient
privileges to allow you to change the 'sa' account password. (You are sure
about that? Since the next part can cause data loss, have another quick look.)
We now need to do something rather more drastic.

If the server is actually running, then you need to stop it. We know that the
only accounts that can stop the server in a nice manner are not available, so
it has to be some sort of kill. You can try:

kill -SIGTERM <dataserver pid>

or

kill -15 <dataserver pid>

(they are identical) which is designed to be caught by ASE, which then performs
the equivalent of shutdown with nowait. If ASE does not die, and you should
give it a little while to catch and act on the signal, then you might have to
try other measures, which is probably kill -9. Note that if you have tables
with identity columns, most of these will jump alarmingly, unless you are using
ASE 12.5 and the identity interval is set to 1.

Once down, edit the RUN_SERVER file (RUN_SERVER.bat on NT) and add "-psa" (it
is important not to leave a space between the "-p" and the "sa", and that it is
all lower-case) to the end of the dataserver or sqlsrvr.exe line. You will end
up with a file that looks a bit like:

#!/bin/sh
#
# Adaptive Server name: N_UTSIRE
# Master device path: /data/sybase/databases/N_UTSIRE/master.dat
# Error log path: /opt/sybase-11.9.2/install/N_UTSIRE.log
# Directory for shared memory files: /opt/sybase-11.9.2
#
# Regenerate sa password -psa
#
/opt/sybase-11.9.2/bin/dataserver \
-sN_UTSIRE \
-d/data/sybase/databases/N_UTSIRE/master.dat \
-e/opt/sybase-11.9.2/install/N_UTSIRE.log \
-M/opt/sybase-11.9.2 -psa \

(I add the comment mentioning the regeneration so that, if I need to do this in
a moment of extreme pressure, it is there in front of my nose.)

Now, start the server again and you should see the following on the screen:

00:00000:00001:2001/05/26 18:29:21.39 server 'bin_iso_1' (ID = 50)
00:00000:00001:2001/05/26 18:29:21.39 server on top of default character set:
00:00000:00001:2001/05/26 18:29:21.39 server 'iso_1' (ID = 1).
00:00000:00001:2001/05/26 18:29:21.39 server Loaded default Unilib conversion handle.

New SSO password for sa:tmfyrkdwpibung

Note that it is not written to the log file, so keep your eyes peeled.

On NT you will have to start the server from the command line and not use
Sybase Central or the control panel.

Obviously, you will want to change the password to something much more
memorable as soon as possible.

Remember to remove the "-psa" from the "RUN" file before you start the server
again or else the password will be changed again for you.

Back to top

-------------------------------------------------------------------------------

1.1.15 How do I set a password to be null?

-------------------------------------------------------------------------------

Since ASE 11 (I cannot remember if it was with the very first release of 11,
but certainly not before) the password column in syslogins has been encrypted.
Setting this column to NULL does not equate to that login having a NULL
password. A NULL password still requires the correct binary string to be in
place.

In release 12 and above, set the minimum password length to be 0 using
sp_configure and give that account a null password, and all should be fine.
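For example (a sketch for 12.x; sp_password's first argument is the password of
the account issuing the command, and victim_login is a made-up name):

sp_configure "minimum password length", 0
go
sp_password sa_password, null, victim_login
go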

Before 12 the minimum password length cannot be changed, so the direct approach
is not available. Instead, update the relevant record in syslogins, setting the
password column to be the same as that of an account which already has a NULL
password.
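A sketch of that update, assuming a login null_donor that genuinely has a NULL
password on a server of the same platform and release (both login names are
made up):

sp_configure "allow updates", 1
go
update master..syslogins
set password = (select password from master..syslogins
                where name = "null_donor")
where name = "victim_login"
go
sp_configure "allow updates", 0
go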

How does one get the correct binary value? When a new ASE is built, the 'sa'
account has a NULL password to start with. Setting an account to have the same
binary value as such an 'sa' account should work. Remember that the binary
string is going to be specific to the operating system and the exact release of
ASE etc. Obviously, if you have set the password of your 'sa' accounts to be
something other than NULL (sensible move), then you are going to have to build
yourself a dummy server just to get the correct string. If this is important to
you, then you may wish to store the value somewhere safe once you have
generated it.

Yet another method would be to simply insert the correct hex string into the
password column. Rob Verschoor has a very nice stored proc on his site called
sp_blank_password to allow you to do just this. Go to http://www.sypron.nl/
blankpwd.html .

Back to top

-------------------------------------------------------------------------------

1.1.16: Does Sybase support Row Level Locking?

-------------------------------------------------------------------------------

With Adaptive Server Enterprise 11.9, Sybase introduced row level locking into
its product. In fact it went further than that: it introduced three different
locking schemes:

* All Pages Locking

This is the scheme that is implemented in all servers prior to 11.9. Here
locks are taken out at the page level, which may include many rows. The
name refers to the fact that all of the pages touched by any data manipulation
statement are locked, both data and index.

* Data Page Locking

The other two locking schemes are bundled together under the title Data
Page Locking, referring to the fact that only data pages are ever locked in
the conventional sense. Data Page Locking is divided into two categories:
+ Data Only Locking


This locking scheme still locks a page at a time, including all of the
rows contained within that page, but uses a new mechanism, called
latches, to lock index pages for the shortest possible time. One of
the consequences of this scheme is that it does not update index pages
in place. To support this, Sybase introduced a new concept, forwarded
rows. These are rows that have had to move because they have grown
beyond the space allowed for them on the page on which they were
created (of each 2K page, 2002 bytes are available for row data).

+ Row Level Locking


Just as it sounds, the lock manager only locks the row involved in the
operation.
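The scheme is chosen per table with the lock clause, and can be changed
afterwards with alter table (a sketch using the 11.9+ syntax; the table is
made up):

create table orders (order_no int not null, qty int not null) lock datarows
go
alter table orders lock datapages
go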

Back to top

-------------------------------------------------------------------------------

1.1.17: What platforms does ASE run on?

-------------------------------------------------------------------------------

Sybase has an excellent lookup page that tells you all of the releases that
Sybase has certified as running on a particular platform. Go to http://
ohno.sybase.com/cgi-bin/ws.exe/cert/ase_cert.hts .

Back to top

-------------------------------------------------------------------------------

1.1.18: How do I backup databases > 64G on ASE prior to 12.x?

-------------------------------------------------------------------------------

As you are all well aware, prior to ASE 12 dumping large databases to disk was
a real pain. Tape was the only option for anything greater than 64 gig. This
was because only 32 dump devices, or stripes, were supported, and since file
based stripes were restricted to no more than 2 gig each, the total amount of
data that could be dumped was <= 32 * 2 = 64G.

With the introduction of ASE 12, the number of stripes was increased

Back to top

-------------------------------------------------------------------------------

User Database Administration # ASE FAQ

David Owen

Apr 20, 2004, 9:45:05 AM
Archive-name: databases/sybase-faq/part7

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

General Troubleshooting

1. How do I turn off marked suspect on my database?
2. On startup, the transaction log of a database has filled and recovery has
suspended, what can I do?
3. Why do my page locks not get escalated to a table lock after 200 locks?

Performance and Tuning Advanced Administration ASE FAQ

-------------------------------------------------------------------------------

1.4.1 How do I turn off marked suspect on my database?

-------------------------------------------------------------------------------

Say one of your databases is marked suspect as the SQL Server is coming up.
Here are the steps to take to unset the flag.

Remember to fix the problem that caused the database to be marked suspect
after switching the flag.

System 11

1. sp_configure "allow updates", 1
2. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
3. begin transaction
4. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
5. commit transaction
6. shutdown
7. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure
14. shutdown
15. startserver -f RUN_*

System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") -
save this value.
4. begin transaction
5. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
6. commit transaction
7. shutdown
8. startserver -f RUN_*
9. fix the problem that caused the database to be marked suspect
10. begin transaction
11. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
12. commit transaction
13. sp_configure "allow updates", 0
14. reconfigure
15. shutdown
16. startserver -f RUN_*

Pre System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") -
save this value.
4. begin transaction
5. update sysdatabases set status = -32767 where dbid = db_id("my_hosed_db")
6. commit transaction
7. you should be able to access the database for it to be cleared out. If not:
1. shutdown
2. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure

Return to top

-------------------------------------------------------------------------------

1.4.2 On startup, the transaction log of a database has filled and recovery
has suspended, what can I do?

-------------------------------------------------------------------------------

You might find the following in the error log:

00:00000:00001:2000/01/04 07:43:42.68 server Can't allocate space for object
'syslogs' in database 'DBbad' because 'logsegment' segment is full/has no free
extents. If you ran out of space in syslogs, dump the transaction log.
Otherwise, use ALTER DATABASE or sp_extendsegment to increase size of the
segment.
00:00000:00001:2000/01/04 07:43:42.68 server Error: 3475, Severity: 21, State:
7
00:00000:00001:2000/01/04 07:43:42.68 server There is no space available in
SYSLOGS for process 1 to log a record for which space has been reserved. This
process will retry at intervals of one minute. The internal error number is -4.

which can prevent ASE from starting properly. Here is a neat solution from Sean
Kiely (sean....@sybase.com) of Sybase Technical Support that works if the
database has any "data only" segments. Obviously this method does not apply to
the master database; the Sybase Troubleshooting Guide has very good coverage of
recovering the master database.

1. You will have to bring the server up with trace flag 3608 to prevent the
recovery of the user databases.
2. sp_configure "allow updates",1
go
3. Write down the segmap entries from the sysusages table for the toasted
database.
4. update sysusages
set segmap = 7
where dbid = db_id("my_toasted_db")
and segmap = 3
5. select status - 320
from sysdatabases
where dbid = db_id("my_toasted_db") -- save this value.
go
begin transaction
update sysdatabases set status = -32768 where dbid = db_id("my_toasted_db")
go -- if all is OK, then...
commit transaction
go
shutdown
go
6. Restart the server without the trace flag. With luck it should now have
enough space to recover. If it doesn't, you are in deeper trouble than
before, you do have a good, recent backup don't you?
7. dump transaction my_toasted_db with truncate_only
go
8. Reset the segmap entries in sysusages to those saved in step 3 above.
9. Shut down ASE and restart. (The trace flag should have gone at step 6, but
ensure that it is not there!)

Return to top

-------------------------------------------------------------------------------

1.4.3: Why do my page locks not get escalated to a table lock after 200 locks?

-------------------------------------------------------------------------------

Several reasons why this may be happening.

* Are you doing the updates from within a cursor?

The lock promotion only happens if you are attempting to take out 200 locks
in a single operation, i.e. a single insert, update or delete. If you
continually loop over a table using a cursor, locking one row at a time, the
lock promotion never fires. Either use an explicit mechanism to lock the
whole table, if that is required, or remove the cursor, replacing it with an
appropriate join.

* A single operation is failing to escalate?

Even if you are performing a single insert, update or delete, Sybase only
attempts to lock the whole table when the lock escalation point is
reached. If this attempt fails because there is another lock which
prevents the escalation, the attempt is aborted and individual page locking
continues.

Return to top

-------------------------------------------------------------------------------

Performance and Tuning Advanced Administration ASE FAQ

David Owen

Apr 20, 2004, 9:45:04 AM
Archive-name: databases/sybase-faq/part5

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

User Database Administration

1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server
1.2.14 sp_helpdb/sp_helpsegment is returning negative numbers

Advanced Administration Basic Administration ASE FAQ

-------------------------------------------------------------------------------

1.2.1: Changing varchar(m) to varchar(n)

-------------------------------------------------------------------------------

Before you start:

select max(datalength(column_name))
from affected_table

In other words, please be sure you're going into this with your head on
straight.

How To Change System Catalogs

This information is Critical To The Defense Of The Free World, and you would be
Well Advised To Do It Exactly As Specified:

use master
go
sp_configure "allow updates", 1
go
reconfigure with override /* System 10 and below */
go
use victim_database
go
select name, colid
from syscolumns
where id = object_id("affected_table")
go
begin tran
go
update syscolumns
set length = new_value
where id = object_id("affected_table")
and colid = value_from_above
go
update sysindexes
set maxlen = maxlen + increase/decrease?
where id=object_id("affected_table")
and indid = 0
go
/* check results... cool? Continue... else rollback tran */
commit tran
go
use master
go
sp_configure "allow updates", 0
go
reconfigure /* System 10 and below */
go

Return to top

-------------------------------------------------------------------------------

1.2.2: FAQ on partitioning

-------------------------------------------------------------------------------

Index of Sections

* What Is Table Partitioning?
+ Page Contention for Inserts
+ I/O Contention
+ Caveats Regarding I/O Contention
* Can I Partition Any Table?
+ How Do I Choose Which Tables To Partition?
* Does Table Partitioning Require User-Defined Segments?
* Can I Run Any Transact-SQL Command on a Partitioned Table?
* How Does Partition Assignment Relate to Transactions?
* Can Two Tasks Be Assigned to the Same Partition?
* Must I Use Multiple Devices to Take Advantage of Partitions?
* How Do I Create A Partitioned Table That Spans Multiple Devices?
* How Do I Take Advantage of Table Partitioning with bcp in?
* Getting More Information on Table Partitioning

What Is Table Partitioning?

Table partitioning is a procedure that creates multiple page chains for a
single table.

The primary purpose of table partitioning is to improve the performance of
concurrent inserts to a table by reducing contention for the last page of a
page chain.

Partitioning can also potentially improve performance by making it possible to
distribute a table's I/O over multiple database devices.

Page Contention for Inserts

By default, ASE stores a table's data in one double-linked set of pages called
a page chain. If the table does not have a clustered index, ASE makes all
inserts to the table in the last page of the page chain.

When a transaction inserts a row into a table, ASE holds an exclusive page lock
on the last page while it inserts the row. If the current last page becomes
full, ASE allocates and links a new last page.

As multiple transactions attempt to insert data into the table at the same
time, performance problems can occur. Only one transaction at a time can obtain
an exclusive lock on the last page, so other concurrent insert transactions
block each other.

Partitioning a table creates multiple page chains (partitions) for the table
and, therefore, multiple last pages for insert operations. A partitioned table
has as many page chains and last pages as it has partitions.

I/O Contention

Partitioning a table can reduce I/O contention when ASE writes information in
the cache to disk. If a table's segment spans several physical disks, ASE
distributes the table's partitions across fragments on those disks when you
create the partitions.

A fragment is a piece of disk on which a particular database is assigned space.
Multiple fragments can sit on one disk or be spread across multiple disks.

When ASE flushes pages to disk and your fragments are spread across different
disks, I/Os assigned to different physical disks can occur in parallel.

To improve I/O performance for partitioned tables, you must ensure that the
segment containing the partitioned table is composed of fragments spread across
multiple physical devices.

Caveats Regarding I/O Contention

Be aware that when you use partitioning to balance I/O you run the risk of
disrupting load balancing even as you are trying to achieve it. The following
scenarios can keep you from gaining the load balancing benefits you want:

* You are partitioning an existing table. The existing data could be sitting
on any fragment. Because partitions are randomly assigned, you run the risk
of filling up a fragment. The partition will then steal space from other
fragments, thereby disrupting load balancing.
* Your fragments differ in size.
* The segment maps are configured such that other objects are using the
fragments to which the partitions are assigned.
* A very large bcp job inserts many rows within a single transaction. Because
a partition is assigned for the lifetime of a transaction, a huge amount of
data could go to one particular partition, thus filling up the fragment to
which that partition is assigned.

Can I Partition Any Table?

No. You cannot partition the following kinds of tables:

1. Tables with clustered indexes (as of release 11.5 it is possible to have a
clustered index on a partitioned table)
2. ASE system tables
3. Work tables
4. Temporary tables
5. Tables that are already partitioned. However, you can unpartition and then
re-partition tables to change the number of partitions.

How Do I Choose Which Tables To Partition?

You should partition heap tables that have large amounts of concurrent insert
activity. (A heap table is a table with no clustered index.) Here are some
examples:

1. An "append-only" table to which every transaction must write
2. Tables that provide a history or audit list of activities
3. A new table into which you load data with bcp in. Once the data is loaded
in, you can unpartition the table. This enables you to create a clustered
index on the table, or issue other commands not permitted on a partition
table.

Does Table Partitioning Require User-Defined Segments?

No. By design, each table is intrinsically assigned to one segment, called the
default segment. When a table is partitioned, any partitions on that table are
distributed among the devices assigned to the default segment.

In the example under "How Do I Create A Partitioned Table That Spans Multiple
Devices?", the table sits on a user-defined segment that spans three devices.

Can I Run Any Transact-SQL Command on a Partitioned Table?

No. Once you have partitioned a table, you cannot use any of the following
Transact-SQL commands on the table until you unpartition it:

1. drop table
2. sp_placeobject
3. truncate table
4. alter table table_name partition n

On releases of ASE prior to 11.5 it was not possible to create a clustered
index on a partitioned table either.

How Does Partition Assignment Relate to Transactions?

A user is assigned to a partition for the duration of a transaction. Assignment
of partitions resumes with the first insert in a new transaction. The user
holds the lock, and therefore partition, until the transaction ends.

For this reason, if you are inserting a great deal of data, you should batch it
into separate jobs, each within its own transaction. See "How Do I Take
Advantage of Table Partitioning with bcp in?", for details.

Can Two Tasks Be Assigned to the Same Partition?

Yes. ASE randomly assigns partitions. This means there is always a chance that
two users will vie for the same partition when attempting to insert and one
would lock the other out.

The more partitions a table has, the lower the probability of users trying to
write to the same partition at the same time.

Must I Use Multiple Devices to Take Advantage of Partitions?

It depends on which type of performance improvement you want.

Table partitioning improves performance in two ways: primarily, by decreasing
page contention for inserts and, secondarily, by decreasing I/O contention.
"What Is Table Partitioning?" explains each in detail.

If you want to decrease page contention you do not need multiple devices. If
you want to decrease I/O contention, you must use multiple devices.

How Do I Create A Partitioned Table That Spans Multiple Devices?

Creating a partitioned table that spans multiple devices is a multi-step
procedure. In this example, we assume the following:

* We want to create a new segment rather than using the default segment.
* We want to spread the partitioned table across three devices, data_dev1,
data_dev2, and data_dev3.

Here are the steps:

1. Define a segment:

sp_addsegment newsegment, my_database, data_dev1

2. Extend the segment across all three devices:

sp_extendsegment newsegment, my_database, data_dev2
sp_extendsegment newsegment, my_database, data_dev3

3. Create the table on the segment:

create table my_table
(names varchar(80) not null)
on newsegment

4. Partition the table:

alter table my_table partition 30
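To undo the partitioning later, which you must do before running any of the
commands listed under "Can I Run Any Transact-SQL Command on a Partitioned
Table?":

alter table my_table unpartition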

How Do I Take Advantage of Table Partitioning with bcp in?

You can take advantage of table partitioning with bcp in by following these
guidelines:

1. Break up the data file into multiple files and simultaneously run each of
these files as a separate bcp job against one table.

Running simultaneous jobs increases throughput.

2. Choose a number of partitions greater than the number of bcp jobs.

Having more partitions than processes (jobs) decreases the probability of
page lock contention.

3. Use the batch option of bcp in. For example, after every 100 rows, force a
commit. Here is the syntax of this command:


bcp table_name in filename -b100

Each time a transaction commits, ASE randomly assigns a new partition for
the next insert. This, in turn, reduces the probability of page lock
contention.

Getting More Information on Table Partitioning

For more information on table partitioning, see the chapter on controlling
physical data placement in the ASE Performance and Tuning Guide.

Return to top

-------------------------------------------------------------------------------

1.2.3: How to manually drop a table

-------------------------------------------------------------------------------

Occasionally you may find that, after issuing a drop table command, the ASE
crashed and consequently the table didn't drop entirely. Sure, you can't see
it, but that sucker is still floating around somewhere.

Here's a list of instructions to follow when trying to drop a corrupt table:

1. sp_configure allow, 1
go
reconfigure with override
go

2. Write db_id down.
use db_name
go
select db_id()
go
3. Write down the id of the bad_table:
select id
from sysobjects
where name = "bad_table_name"
go
4. You will need these index IDs to run dbcc extentzap. Also, remember that if
the table has a clustered index you will need to run extentzap on index
"0", even though there is no sysindexes entry for that indid.
select indid
from sysindexes
where id = table_id
go
5. This is not required but a good idea:
begin transaction
go
6. Type in this short script; it gets rid of all system catalog information
for the object, including any object and procedure dependencies that may be
present.

Some of the entries are unnecessary but better safe than sorry.

declare @obj int
select @obj = id from sysobjects where name = "bad_table_name"
delete syscolumns where id = @obj
delete sysindexes where id = @obj
delete sysobjects where id = @obj
delete sysprocedures where id in
(select id from sysdepends where depid = @obj)
delete sysdepends where depid = @obj
delete syskeys where id = @obj
delete syskeys where depid = @obj
delete sysprotects where id = @obj
delete sysconstraints where tableid = @obj
delete sysreferences where tableid = @obj
delete sysdepends where id = @obj
go
7. Just do it!
commit transaction
go
8. Gather information to run dbcc extentzap:
use master
go
sp_dboption db_name, read, true
go
use db_name
go
checkpoint
go
9. Run dbcc extentzap once for each index (including index 0, the data level)
that you got from above:
use master
go
dbcc traceon (3604)
go
dbcc extentzap (db_id, obj_id, indx_id, 0)
go
dbcc extentzap (db_id, obj_id, indx_id, 1)
go


Notice that extentzap runs twice for each index. This is because the
last parameter (the sort bit) might be 0 or 1 for each index, and you
want to be absolutely sure you clean them all out.

10. Clean up after yourself.
sp_dboption db_name, read, false
go
use db_name
go
checkpoint
go
sp_configure allow, 0
go
reconfigure with override
go

Return to top

-------------------------------------------------------------------------------

1.2.4: Why not max out all my columns?

-------------------------------------------------------------------------------

People occasionally ask the following valid question:

Suppose I have varying lengths of character strings none of which should
exceed 50 characters.

Is there any advantage of last_name varchar(50) over this last_name varchar
(255)?

That is, for simplicity, can I just define all my varying strings to be
varchar(255) without even thinking about how long they may actually be? Is
there any storage or performance penalty for this?

There is no performance penalty by doing this but as another netter pointed
out:

If you want to define indexes on these fields, then you should specify the
smallest size because the sum of the maximal lengths of the fields in the
index can't be greater than 256 bytes.

and someone else wrote in saying:

Your data structures should match the business requirements. This way the
data structure themselves becomes a data dictionary for others to model
their applications (report generation and the like).
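
To make the index limit concrete, here is a minimal sketch (the table and
column names are invented for illustration):

create table employee
(
    last_name  varchar(50)  not null,   /* sized to the business rule */
    first_name varchar(50)  not null
)
go
/* maximal key width is 100 bytes, comfortably inside the 256-byte limit
** mentioned above; with varchar(255) columns the same index would be
** rejected (2 * 255 = 510 bytes) */
create index name_idx on employee (last_name, first_name)
go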

Return to top

-------------------------------------------------------------------------------

1.2.5: What's a good example of a transaction?

-------------------------------------------------------------------------------


This answer is geared for Online Transaction Processing (OLTP)
applications.

To gain maximum throughput all your transactions should be in stored procedures
- see Q1.5.8. The transactions within each stored procedure should be short and
simple. All validation should be done outside of the transaction and only the
modification to the database should be done within the transaction. Also, don't
forget to name the transaction for sp_whodo - see Q9.2.

The following is an example of a good transaction:

/* perform validation */
select ...
if ... /* error */
    /* give error message */
else /* proceed */
begin
    begin transaction acct_addition
    update ...
    insert ...
    commit transaction acct_addition
end

The following is an example of a bad transaction:

begin transaction poor_us
update X ...
select ...
if ... /* error */
    /* give error message */
else /* proceed */
begin
    update ...
    insert ...
end
commit transaction poor_us

This is bad because:

* the first update on table X is held throughout the transaction. The idea
with OLTP is to get in and out fast.
* If an error message is presented to the end user and we await their
response, we'll maintain the lock on table X until the user presses return.
If the user is out in the can we can wait for hours.

Return to top

-------------------------------------------------------------------------------

1.2.6: What's a natural key?

-------------------------------------------------------------------------------

Let me think back to my database class... okay, I can't think that far so I'll
paraphrase... essentially, a natural key is a key for a given table that
uniquely identifies the row. It's natural in the sense that it follows the
business or real world need.

For example, assume that social security numbers are unique (I believe they
are meant to be unique but it's not always the case), then if you had the
following employee table:

employee:

ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then a natural key would be ssn. If the combination of f_name and l_name were
unique at this company, then another natural key would be f_name, l_name. As a
matter of fact, you can have many natural keys in a given table but in practice
what one does is build a surrogate (or artificial) key.

The surrogate key is guaranteed to be unique because (wait, get back, here it
goes again) it's typically a monotonically increasing value. Okay, my
mathematician wife would be proud of me... really all it means is that each
new key is the previous key plus one: i, i+1, i+2, ...

The reason one uses a surrogate key is because your joins will be faster.

If we extended our employee table to have a surrogate key:

employee:

id identity
ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then instead of doing the following:

where a.f_name = b.f_name
and a.l_name = b.l_name

we'd do this:

where a.id = b.id

We can build indexes on these keys and since Sybase's atomic storage unit is
2K, we can stash more values per 2K page with smaller indexes thus giving us
better performance (imagine the key being 40 bytes versus being say 4 bytes...
how many 40 byte values can you stash in a 2K page versus a 4 byte value? --
and how much wood could a wood chuck chuck, if a wood chuck could chuck wood?)

Does it have anything to do with natural joins?

Um, not really... from "A Guide to Sybase..", McGovern and Date, p. 112:

The equi-join by definition must produce a result containing two identical
columns. If one of those two columns is eliminated, what is left is called
the natural join.

Return to top

-------------------------------------------------------------------------------

1.2.7: Making a Stored Procedure invisible

-------------------------------------------------------------------------------

System 11.5 and above

It is now possible to encrypt your stored procedure code that is stored in the
syscomments table. This is preferable to the old method of deleting the data,
as deleting will impact future upgrades. You can encrypt the text with the
sp_hidetext system procedure.
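
For example, to hide the text of a single procedure (the database and
procedure names here are placeholders):

use affected_database
go
sp_hidetext "my_proc"
go

Note that sp_hidetext is irreversible, so keep the source of your procedures
under version control elsewhere.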

Pre-System 11.5

Perhaps you are trying to prevent the buyer of your software from defncopy'ing
all your stored procedures. It is perfectly safe to delete the syscomments
entries of any stored procedures you'd like to protect:

sp_configure "allow updates", 1

go
reconfigure with override /* System 10 and below */
go
use affected_database
go
delete syscomments where id = object_id("procedure_name")
go
use master
go


sp_configure "allow updates", 0

go

I believe in future releases of Sybase we'll be able to see the SQL that is
being executed. I don't know if that would be simply the stored procedure name
or the SQL itself.

Return to top

-------------------------------------------------------------------------------

1.2.8: Saving space when inserting rows monotonically

-------------------------------------------------------------------------------

If the columns that comprise the clustered index are monotonically increasing
(that is, new row key values are greater than those previously inserted) the
following System 11 dbcc tune will stop ASE splitting the page when it is half
way full. Rather, it'll let the page fill and then allocate another page:

dbcc tune(ascinserts, 1, "my_table")

By the way, SyBooks is wrong when it states that the above needs to be reset
when ASE is rebooted. This is a permanent setting.

To undo it:

dbcc tune(ascinserts, 0, "my_table")

Return to top

-------------------------------------------------------------------------------

1.2.9: How to compute database fragmentation

-------------------------------------------------------------------------------

Command

dbcc traceon(3604)
go
dbcc tab(production, my_table, 0)
go

Interpretation

A delta of one means the next page is on the same track, two is a short seek,
three is a long seek. You can play with these constants but they aren't that
important.

A table I thought was unfragmented had L1 = 1.2, L2 = 1.8.

A table I thought was fragmented had L1 = 2.4, L2 = 6.6.

How to Fix

You fix a fragmented table with a clustered index by dropping and recreating
the index. This measurement isn't the correct one for tables without clustered
indexes. If your table doesn't have a clustered index, create a dummy one and
drop it.
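
A sketch of the dummy index trick (my_table and some_col are placeholders):

create clustered index tmp_defrag on my_table (some_col)
go
drop index my_table.tmp_defrag
go

Building the clustered index physically rewrites the table in key order;
remember that it needs a good chunk of free space in the database (roughly
120% of the table's size) while it runs.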

Return to top

-------------------------------------------------------------------------------

1.2.10: Tasks a DBA should do...

-------------------------------------------------------------------------------

A good presentation of a DBA's duties has been made available by Jeff Garbus (
je...@soaringeagleltd.com) of Soaring Eagle Consulting Ltd (http://
www.soaringeagleltd.com) and numerous books can be found here. These are
Powerpoint slides converted to web pages and so may be difficult to view with a
text browser!

An alternative view is catalogued below. (OK, so this list is crying out for a
bit of a revamp since checkstorage came along - Ed!)

DBA Tasks
+-------------------------------------------------------------------------+
| Task                   | Reason        | Period                         |
|------------------------+---------------+--------------------------------|
| dbcc checkdb,          | I consider    | If your ASE permits, daily     |
| checkcatalog,          | these the     | before your database dumps. If |
| checkalloc             | minimal       | this is not possible due to    |
|                        | dbcc's to     | the size of your databases,    |
|                        | ensure the    | then try the different options |
|                        | integrity of  | so that by the end of, say, a  |
|                        | your database | week, you've run them all.     |
|------------------------+---------------+--------------------------------|
| Disaster recovery      | Always be     |                                |
| scripts - scripts to   | prepared for  |                                |
| rebuild your ASE in    | the worst.    |                                |
| case of hardware       | Make sure to  |                                |
| failure                | test them.    |                                |
|------------------------+---------------+--------------------------------|
| scripts to logically   |               |                                |
| dump your master       | You can       |                                |
| database, that is bcp  | selectively   |                                |
| the critical system    | rebuild your  |                                |
| tables: sysdatabases,  | database in   | Daily                          |
| sysdevices, syslogins, | case of       |                                |
| sysservers, sysusers,  | hardware      |                                |
| syssegments,           | failure       |                                |
| sysremotelogins        |               |                                |
|------------------------+---------------+--------------------------------|
|                        | A system      | After any change as well as    |
|                        | upgrade is    | daily                          |
| %ls -la <disk_devices> | known to      |                                |
|                        | change the    |                                |
|                        | permissions.  |                                |
|------------------------+---------------+--------------------------------|
| dump the user          | CYA*          | Daily                          |
| databases              |               |                                |
|------------------------+---------------+--------------------------------|
| dump the transaction   | CYA           | Daily                          |
| logs                   |               |                                |
|------------------------+---------------+--------------------------------|
| dump the master        | CYA           | After any change as well as    |
| database               |               | daily                          |
|------------------------+---------------+--------------------------------|
|                        | This is the   |                                |
| System 11 and beyond - | configuration |                                |
| save the $DSQUERY.cfg  | that you've   | After any change as well as    |
| to tape                | dialed in,    | daily                          |
|                        | why redo the  |                                |
|                        | work?         |                                |
|------------------------+---------------+--------------------------------|
|                        |               | Depending on how often your    |
|                        |               | major tables change. Some      |
|                        |               | tables are pretty much static  |
|                        |               | (e.g. lookup tables) so they   |
| update statistics on   | To ensure the | don't need an update           |
| frequently changed     | performance   | statistics, other tables       |
| tables and             | of your ASE   | suffer severe trauma (e.g.     |
| sp_recompile           |               | massive updates/deletes/       |
|                        |               | inserts) so an update stats    |
|                        |               | needs to be run either nightly |
|                        |               | /weekly/monthly. This should   |
|                        |               | be done using cronjobs.        |
|------------------------+---------------+--------------------------------|
| create a dummy ASE and |               |                                |
| do bad things to it:   | See disaster  | When time permits              |
| delete devices,        | recovery!     |                                |
| destroy permissions... |               |                                |
|------------------------+---------------+--------------------------------|
| Talk to the            | It's better   |                                |
| application            | to work with  | As time permits.               |
| developers.            | them than     |                                |
|                        | against them. |                                |
|------------------------+---------------+--------------------------------|
| Learn new tools        | So you can    | As time permits.               |
|                        | sleep!        |                                |
|------------------------+---------------+--------------------------------|
| Read                   | Passes the    | Priority One!                  |
| comp.databases.sybase  | time.         |                                |
+-------------------------------------------------------------------------+

* Cover Your Ass

Return to top

-------------------------------------------------------------------------------

1.2.11: How to implement database security

-------------------------------------------------------------------------------

This is a brief run-down of the features and ideas you can use to implement
database security:

Logins, Roles, Users, Aliases and Groups

* sp_addlogin - Creating a login adds a basic authorisation for an account -
a username and password - to connect to the server. By default, no access
is granted to any individual databases.
* sp_adduser - A user is the addition of an account to a specific database.
* sp_addalias - An alias is a method of allowing an account to use a specific
database by impersonating an existing database user or owner.
* sp_addgroup - Groups are collections of users at the database level. Users
can be added to groups via the sp_adduser command.

A user can belong to only one group - a serious limitation that Sybase
might be addressing soon according to the ISUG enhancement requests.
Permissions on objects can be granted or revoked to or from users or
groups.

* sp_role - A role is a high-level Sybase authorisation to act in a specific
capacity for administration purposes. Refer to the Sybase documentation for
details.

Recommendations

Make sure there is a unique login account for each physical person and/or
process that uses the server. Creating generic logins used by many people or
processes is a bad idea - there is a loss of accountability and it makes it
difficult to track which particular person is causing server problems when
looking at the output of sp_who. Note that the output of sp_who gives a
hostname - properly coded applications will set this value to something
meaningful (ie. the machine name the client application is running from) so you
can see where users are running their programs. Note also that if you look at
master..sysprocesses rather than just sp_who, there is also a program_name.
Again, properly coded applications will set this (eg. to 'isql') so you can see
which application is running. If you're coding your own client applications,
make sure you set hostname and program_name via the appropriate Open Client
calls. One imaginative use I've seen of the program_name setting is to
incorporate the connection time into the name, eg APPNAME-DDHHMM (you have 16
characters to play with), as there's no method of determining this otherwise.

Set up groups, and add your users to them. It is much easier to manage an
object permissions system in this way. If all your permissions are set to
groups, then adding a user to the group ensures that users automatically
inherit the correct permissions - administration is *much* simpler.
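
A minimal sketch of that flow (all names are hypothetical):

sp_addlogin jsmith, "initial_password"
go
use sales_db
go
sp_addgroup sales_readers
go
/* add the login as a user of the database and drop it into the group */
sp_adduser jsmith, jsmith, sales_readers
go
/* grant to the group, never to the individual */
grant select on orders to sales_readers
go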

Objects and Permissions

Access to database objects is defined by granting and/or revoking various
access rights to and from users or groups. Refer to the Sybase documentation
for details.

Recommendations

The ideal setup has all database objects being owned by the dbo, meaning no
ordinary users have any default access at all. Specific permissions users
require to access the database are granted explicitly. As mentioned above - set
permissions for objects to a group and add users to that group. Any new user
added to the database via the group then automatically obtains the correct set
of permissions.

Preferably, no access is granted at all to data tables, and all read and write
activity is accomplished through stored procedures that users have execute
permission on. The benefit of this from a security point of view is that access
can be rigidly controlled with reference to the data being manipulated, user
clearance levels, time of day, and anything else that can be programmed via
T-SQL. The other benefits of using stored procedures are well known (see Q1.5.8
). Obviously whether you can implement this depends on the nature of your
application, but the vast majority of in-house-developed applications can rely
solely on stored procedures to carry out all the work necessary. The only
server-side restriction on this method is the current inability of stored
procedures to adequately handle text and image datatypes (see Q1.5.12). To get
around this views can be created that expose only the necessary columns to
direct read or write access.
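
As a sketch of the procedure-only pattern (the table, procedure and group
names are hypothetical):

/* ordinary users get no direct access to the table... */
revoke all on orders from public
go
/* ...and reach the data only through a procedure you control */
create procedure get_order
    @id int
as
    select order_date, amount
    from   orders
    where  id = @id
go
grant execute on get_order to sales_readers
go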

Views

Views can be a useful general security feature. Where stored procedures are
inappropriate views can be used to control access to tables to a lesser extent.
They also have a role in defining row-level security - eg. the underlying table
can have a security status column joined to a user authorisation level table in
the view so that users can only see data they are cleared for. Obviously they
can also be used to implement column-level security by screening out sensitive
columns from a table.
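
A sketch of such a row-level security view (the tables and the clearance
scheme are invented for illustration):

create view cleared_docs
as
select d.doc_id, d.security_status, d.body
from   documents d, user_clearance u
where  u.name  = suser_name()           /* the login running the query */
and    u.level >= d.security_status    /* only rows the user is cleared for */
go
grant select on cleared_docs to doc_readers
go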

Triggers

Triggers can be used to implement further levels of security - they could be
viewed as a last line of defence in being able to rollback unauthorised write
activity (they cannot be used to implement any read security). However, there
is a strong argument that triggers should be restricted to doing what they were
designed for - implementing referential integrity - rather than being loaded
up with application logic.

Administrative Roles

With Sybase version 10 came the ability to grant certain administrative roles
to user accounts. Accounts can have sa-level privilege, or be restricted to
security or operator roles - see sp_role.

Recommendations

The use of any generic account is not a good idea. If more than one person
requires access as sa to a server, then it is more accountable and traceable if
they each have an individual account with sa_role granted.

Return to top

-------------------------------------------------------------------------------

1.2.12: How to Shrink a Database

-------------------------------------------------------------------------------


Warning: This document has not been reviewed. Treat it as alpha-test
quality information and report any problems and suggestions to
br...@sybase.com

It has historically been difficult to shrink any database except tempdb
(because it is created fresh every boot time). The two methods commonly used
have been:

1. Ensure that you have scripts for all your objects (some tools like SA
Companion, DB Artisan or dbschema.pl from Sybperl can create scripts from
an existing database), then bcp out your data, drop the database, recreate
it smaller, run your scripts, and bcp in your data.
2. Use a third-party tool such as DataTools' SQL BackTrack, which in essence
automates the first process.

This technote outlines a third possibility that can work in most cases.

An Unsupported Method to Shrink a Database

This process is fairly trivial in some cases, such as removing a recently added
fragment or trimming a database that has a log fragment as its final
allocation, but can also be much more complicated or time consuming than the
script and bcp method.

General Outline

The general outline of how to do it is:

1. Make a backup of the current database
2. Migrate data from sysusages fragments with high lstart values to fragments
with low lstart values.
3. Edit sysusages to remove high lstart fragments that no longer have data
allocations.
4. Reboot ASE.

Details

1. Dump your database. If anything goes wrong, you will need to recover from
this backup!
2. Decide how many megabytes of space you wish to remove from your database.
3. Examine sysusages for the database. You will be shrinking the database by
removing the fragments with the highest lstart values. If the current
fragments are not of appropriate sizes, you may need to drop the database,
recreate it so there are more fragments, and reload the dump.


A trivial case: An example of a time when you can easily shrink a
database is if you have just altered it and are sure there has been no
activity on the new fragment. In this case, you can directly delete the
last row in sysusages for the db (this row was just added by alter db)
and reboot the server and it should come up cleanly.
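
To see the fragments and their lstart values, something like this query
helps (the database name is a placeholder):

select dbid, segmap, lstart, size, vstart
from master..sysusages
where dbid = db_id("my_db")
order by lstart
go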

4. Change the segmaps of the fragments you plan to remove to 0. This will
prevent future data allocations to these fragments.


Note: If any of the fragments you are using have user defined segments
on them, drop those segments before doing this.



sp_configure "allow updates", 1

go
reconfigure with override -- not necessary in System 11
go
update sysusages
set segmap = 0
where dbid = <dbid>
and lstart = <lstart>
go
dbcc dbrepair(<dbname>, remap)
go

Ensure that there is at least one data (segmap 3) and one log (segmap 4)
fragment, or one mixed (segmap 7) fragment.

If the server has been in use for some time, you can shrink it by deleting
rows from sysusages for the db, last rows first, after making sure that no
objects have any allocations on the usages.

5. Determine which objects are on the fragments you plan to remove.
dbcc traceon(3604)
go
dbcc usedextents(<dbid>, 0, 0, 1)
go

Find the extent with the same value as the lstart of the first fragment you
plan to drop. You need to migrate every object appearing from this point on
in the output.

6. Migrate these objects onto earlier fragments in the database.

Objids other than 0 or 99 are objects that you must migrate or drop. You
can migrate a user table by building a new clustered index on the table
(since the segmap was changed, the new allocations will not go on this
fragment).

You can migrate some system tables (but not all) using the sp_fixindex
command to rebuild its clustered index. However, there are a few system
tables that cannot have their clustered indexes rebuilt, and if they have
any allocations on the usage, you are out of luck.

If the objid is 8, then it is the log. You can migrate the log by ensuring
that another usage has a log segment (segmap 4 or 7). Do enough activity on
the database to fill an extent's worth of log pages, then checkpoint and
dump tran.

Once you have moved all the objects, delete the row from sysusages and
reboot the server.

Run dbcc checkdb and dbcc checkalloc on the database to be sure you are ok,
then dump the database again.

Return to top

-------------------------------------------------------------------------------

1.2.13: How do I audit the SQL sent to the server?

-------------------------------------------------------------------------------

This does not seem to be well documented, so here is a quick means of auditing
the SQL text that is sent to the server. Note that this simply audits the SQL
sent to the server. So, if your user process executes a big stored procedure,
all you will see here is a call to the stored procedure. None of the SQL that
is executed as part of the stored procedure will be listed.

Firstly, you need to have installed Sybase security (which involves installing
the sybsecurity database and loading it using the script $SYBASE/scripts/
installsecurity). Read the Sybase Security Administration Manual, you may
want to enable a threshold procedure to toggle between a couple of audit
tables. Be warned that the default configuration option "suspend audit when
device full" is set to 1. This means that the server will suspend all normal
SQL operations if the audit database becomes full, until the sso logs in and
gets rid of some data. You might want to consider changing this to 0
unless yours is a particularly sensitive installation.

Once that is done, you need to enable auditing. If you haven't already, you
will need to restart ASE in order to start the audit subsystem. Then comes the
bit that does not seem well documented, you need to select an appropriate audit
option, and the one for the SQL text is "cmdtext". From the sybsecurity
database, issue

sp_audit "cmdtext",<username>,"all","on"

for each user on the system that you wish to collect the SQL for. sp_audit seems
to imply that you can replace "<username>" with all, but I get the error
message "'all' is not a valid user name". Finally, enable auditing for the
system as a whole using

sp_configure "auditing",1
go

If someone knows where in the manuals this is well documented, I will add a
link/reference.

Note: The stored procedure sp_audit had a different name under previous
releases. I think that it was called sp_auditoption. Also, to get a full list
of the options and their names, go into sybsecurity and simply run sp_audit
with no arguments.

Return to top

-------------------------------------------------------------------------------

1.2.14: sp_helpdb/sp_helpsegment is returning negative numbers

-------------------------------------------------------------------------------

A number of releases of ASE return negative numbers for sp_helpdb. One solution
given by Sybase is to restart the server. Hmm... not always possible. An
alternative is to use the dbcc command 'usedextents'. Issue the following:

dbcc traceon(3604)
dbcc usedextents(<dbid>, 0, 1, 1)

and the problem should disappear. This is actually a solved case, Sybase solved
case no. 10454336; go to http://info.sybase.com/resolution/detail.stm?id_number
=10454336 for more information.

Return to top

-------------------------------------------------------------------------------

Advanced Administration Basic Administration ASE FAQ


1.5.9: You and showplan output

-------------------------------------------------------------------------------

As recently pointed out in the Sybase-L list, the showplan information that was
here is terribly out of date. It was written back when the output from ASE and
MS SQL Server were identical. (To see just how different they have become,
have a look at the O'Reilly book "Transact-SQL Programming". It does a line for
line comparison.) The write up in the Performance and Tuning Guide is
excellent, and this section was doing nothing but causing problems.

If you do have a need for the original document, then it can be found here, but
it will no longer be considered part of the official FAQ.

Back to top

-------------------------------------------------------------------------------

1.5.10: Poor man's sp_sysmon

-------------------------------------------------------------------------------

This is needed for System 10 and Sybase 4.9.2 where there is no sp_sysmon
command available.

Fine tune the waitfor for your application. You may need TS Role -- see Q3.1.

use master
go
dbcc traceon(3604)
dbcc monitor ("clear", "all", "on")
waitfor delay "00:01:00"
dbcc monitor ("sample", "all", "on")
dbcc monitor ("select", "all", "on")
dbcc traceon(8399)
select field_name, group_name, value
from sysmonitors
dbcc traceoff(8399)
go
dbcc traceoff(3604)
go

Back to top

-------------------------------------------------------------------------------

1.5.11: View MRU-LRU procedure cache chain

-------------------------------------------------------------------------------

dbcc procbuf gives a listing of the current contents of the procedure cache. By
repeating the process at intervals it is possible to watch procedures moving
down the MRU-LRU chain, and so to see how long procedures remain in cache. The
neat thing about this approach is that you can size your cache according to
what is actually happening, rather than relying on estimates based on
assumptions that may not hold on your site.

To run it:

dbcc traceon(3604)
go
dbcc procbuf
go

If you use sqsh it's a bit easier to grok the output:

dbcc traceon(3604);
dbcc procbuf;|fgrep <pbname>

See Q1.5.7 regarding procedure cache sizing.

Back to top

-------------------------------------------------------------------------------

1.5.12: Improving Text/Image Type Performance

-------------------------------------------------------------------------------

If you know that you are going to be using a text/image column immediately,
insert the row setting the column to a non-null value.

There's a noticeable performance gain.

Unfortunately, text and image datatypes cannot be passed as parameters to
stored procedures. The address of the text or image location must be created
and returned where it is then manipulated by the calling code. This means that
transactions involving both text and image fields and stored procedures are not
atomic. However, the datatypes can still be declared as not null in the table
definition.

Given this example -

create table key_n_text
(
    key_id int  not null,
    notes  text not null
)

This stored procedure can be used -

create procedure sp_insert_key_n_text
    @key     int,
    @textptr varbinary(16) output
as

/*
** Generate a valid text pointer for WRITETEXT by inserting an
** empty string in the text field. (The column is named key_id
** rather than key, since "key" is a reserved word in ASE.)
*/
insert key_n_text
(
    key_id,
    notes
)
values
(
    @key,
    ""
)

select @textptr = textptr(notes)
from key_n_text
where key_id = @key

return 0
go

The return parameter is then used by the calling code to update the text field,
via the dbwritetext() function if using DB-Library for example.

Back to top

-------------------------------------------------------------------------------

Server Monitoring General Troubleshooting ASE FAQ

Server Monitoring

1.6.1 What is Monitor Server and how do I configure it?
1.6.2 OK, that was easy, how do I configure a client?

Platform Specific Issues - Solaris Performance and Tuning ASE FAQ

-------------------------------------------------------------------------------

1.6.1: How do I configure Monitor Server?

-------------------------------------------------------------------------------

Monitor Server is a separate server from the normal dataserver. Its purpose, as
the name suggests, is to monitor ASE. It uses internal counters to determine
what is happening. On its own, it does not actually do a lot. You need to hook
up a client of some sort in order to be able to view the results.

Configuration is easy. The Sybase documentation is very good on this one for
either Unix or NT. Rather than repeat myself, go to the Sybase web site and
check out the Monitor Server User's Guide. Obviously the link should take you
to the HTML edition of the book. There is also a PDF available. Look for
"monbook.pdf". If Sybase has skipped to ASE 99.9 and this link no longer works,
then you will have to go search the Sybase home pages.

Back to top

-------------------------------------------------------------------------------

1.6.2: OK, that was easy, how do I configure a client?

-------------------------------------------------------------------------------

I see that you like a challenge! Sybase offers a Java client to view the output
from Monitor Server. It is accessible either standalone or via the Win32
edition of Sybase Central.

Standalone on NT/2000

I could not find anything about setting up the clients in the standard
documentation set. However, there is a small paper on it here (towards the
bottom). It does miss out a couple of important details, but is helpful for all
that.

I did not try too hard to get the 11.9.2 version running, since the 12.5
version will monitor 11.9 servers.

I do not have a boxed release of ASE 12.5 for NT, just the developers release.
This does not come with all of the necessary files. In order to run the Monitor
Client, you will need the PC Client CD that came with the boxed release. If all
you have is the developer's edition, you might be stuck. It would be worth
getting in touch with Sybase to see if they could ship you one. There is
probably a charge!

You will need to install the client software. If you have a release of ASE
already installed and running you might want to install this into a separate
area. I am not sure what files it includes and versions etc, but if you have
the space I recommend saving yourself some hassle. If you have an older edition
of ASE installed, the installation will ask if you want to overwrite two files,
mclib.dll and mchelp.dll, both of which should reside in your winnt/system32
directory. It is important that you accept both of the overwrites. The older
versions of these files do not seem to work.

Once installed, you will also need to spend some time playing with environment
variables. I have got 3 editions of ASE all running successfully on the one
machine (see Q1.3.9). I chose to have one user for each ASE instance, each with
their own local environment variables pointing to the relevant installation for
them, plus a generic account for my main user that I configured to use the
software installed from the client CD. I adjusted the variables so that each
user had their own set of variables and all of the installations worked OK.

Next, you need a copy of Java 1.1.8 installed. The client CD has a copy of JDK
1.1.8 in the "ASEP_Win32" directory. This is the one to go for, as I am sure
that it was the one that the Monitor Client was built with. I did try a version
from Sun's Java archive, but it failed.

Next, set up the JAVA_HOME environment variable. If you installed the JDK into
its default location, that will be C:\jdk1.1.8.

Check to ensure that your CLASSPATH is defined as (assuming that you installed
the client into C:\Sybase_Client):

C:\Sybase_Client\ASEP_Win32\monclass.zip;C:\Sybase_Client\ASEP_Win32\3pclass.zip;%JAVA_HOME%\lib\rt.jar

You may want to check that the files mclib.dll and mchelp.dll exist in your
winnt/system32 directory if you were not asked to replace them earlier. You may
also want to check that the default Java command is correct with java -version.
It should return

java version "1.1.8"

You should now be able to fire up the main window with:

java sybase.monclt.mcgui.procact.ProcActApp 12.5 sa "sa_password" en 0 sccsen.hlp

(The paper says that you should use "jre" and not "java". That gives me a
consistent "Class not found...". I do not know why.)

You should be presented with a screen like this, which will fill with process
information after 10 seconds. Choose "File->Monitors >" to choose a monitoring
graph. Here are a couple of screenshots from various monitors:

* Performance Summary
* Performance Trends...
* Process Current SQL Statement
* Network Activity

Obviously, all of this can be set from the command line or via a batch script.
Shove the following into a file called mon.bat and invoke using mon ASE_SERVER
MON_SERVER PASSWORD

SET JAVA_HOME=C:\JDK1.1.8
SET PATH=%JAVA_HOME%\bin;%PATH%
SET CLASSPATH=C:\SYBASE_CLIENT\ASEP_Win32\monclass.zip;C:\SYBASE_CLIENT\ASEP_Win32\3pclass.zip
java sybase.monclt.mcgui.procact.ProcActApp %1 12.5 %2 sa "%3" en 0 scssen.hlp

Obviously, you will need to replace "C:\SYBASE_CLIENT" with the correct string
pointing to your Sybase ASE installation.

Via Sybase Central on NT/2000

You will need to have installed the version of the Java Development Kit that
comes with your CD, as per standalone installation. Next, create a shortcut to
the file %SYBASE%\Sybase Central 3.2\win32\scview.exe. This is the Win 32
version of Sybase Central. Next, edit the shortcut's properties (right click on
the shortcut and select "Properties"). Now, edit the "Start In" field to be "C:
\jdk1.1.8\bin", assuming that you installed the JDK into its default location.

Now, assuming that both the ASE and Monitor servers are running, start up this
version of Sybase Central. Unlike the Java edition, all of the Servers from the
SQL.INI file are displayed at startup. Right click on the ASE server you wish
to monitor and select "Properties". This brings up a triple tabbed screen.
Select the "Monitor Server" tab and use the drop down to select the appropriate
monitor server. Now, connect to the ASE server and you will see another level
in the options tree called "Monitors". Click on it and you should see a
complete list of the monitors you can choose from. Double clicking on one
should display it. The output is exactly the same as for standalone operation.

Back to top

-------------------------------------------------------------------------------

Platform Specific Issues - Solaris Performance and Tuning ASE FAQ


1.5.7: How much memory to configure?

-------------------------------------------------------------------------------

System 10 and below.

Overview

At some point you'll wonder if your ASE has been configured with sufficient
memory. We hope that it's not during some crisis but that's probably when it'll
happen.

The most important thing in setting up memory for an ASE is that it has to be
large enough to accommodate:

* concurrent user connections
* active procedures
* and concurrent open databases.

Setting the ASE up incorrectly will affect its performance. A delicate balance
needs to be struck: your ASE must be large enough to accommodate the users,
but not so large that it adversely affects the host machine (such as by
causing swapping).

Assumptions made of the reader:

* The reader has some experience administering ASEs.
* All queries have been tuned and that there are no unnecessary table scans.

Preface

As the ASE starts up, it pre-allocates its structures to support the
configuration. The memory that remains after the pre-allocation phase is the
available cache.

The available cache is partitioned into two pieces:

1. buffer cache - data pages to be sent to a user connection or flushed to
disk.
2. procedure cache - where query plans live.

The idea is to determine if the buffer cache and the procedure cache are of
adequate size. As a DBA you can use dbcc memusage to ascertain this.

The information provided by dbcc memusage is daunting at first but, taken in
sections, is easy to understand and provides the DBA with the vital information
necessary to determine if more memory is required and where it is required.

If the procedure cache is too small, user connections will get sporadic 701's:

There is insufficient system memory to run this query.

If the buffer cache is too small, response time may be poor or spiky.

The following text describes how to interpret the output of dbcc memusage and
to correlate this back to the fundamental question:

Does my ASE have enough memory?

Definitions

Before delving into the world of dbcc memusage some definitions to get us
through.

Buffer Cache (also referred to as the Data Cache)
Area of memory where ASE stores the most recently used data pages and index
pages in 2K page units. If ASE finds a data page or index page in the
buffer cache, it doesn't need to perform a physical I/O (it is reported as
a logical I/O). If a user connection selects data from a database, the ASE
loads the 2K data page(s) here and then hands the information off to the
user connection. If a user connection updates data, these pages are
altered, and then they are flushed out to disk by the ASE.


This is a bit simplistic but it'll do. Read on for more info though.

The cache is maintained as a doubly linked list. The head of the list
is where the most recently used pages are placed. Naturally towards the
tail of the chain are the least recently used pages. If a page is
requested and it is found on the chain, it is moved back to the front
of the chain and the information is relayed, thus saving a physical I/O.

But wait! this recycling is not done forever. When a checkpoint occurs
any dirty pages are flushed. Also, the parameter cbufwashsize
determines how many times a page containing data can be recycled before
it has to be flushed out to disk. For OAM and index pages the following
parameters apply coamtrips and cindextrips respectively.

Procedure Cache
Area of memory where ASE stores the most recently used query plans of
stored procedures and triggers. This procedure cache is also used by the
Server when a procedure is being created and when a query is being
compiled. Just like the buffer cache, if SQL Server finds a procedure or a
compilation already in this cache, it doesn't need to read it from the
disk.

The size of procedure cache is determined by the percentage of remaining
memory configured for this Server parameter after ASE memory needs are met.

Available Cache

When the ASE starts up it pre-allocates its data structures to support the
current configuration. For example, based on the number of user connections,
additional netmem, open databases and so forth the dataserver pre-allocates how
much memory it requires to support these configured items.

What remains after the pre-allocation is the available cache. The available
cache is divided into buffer cache and procedure cache. The sp_configure
"procedure cache" parameter determines the percentage breakdown. A value of 20
would read as follows:

20% of the available cache is dedicated to the procedure cache and 80% is
dedicated to the buffer cache.

Your pal: dbcc memusage

dbcc memusage takes a snapshot of your ASE's current memory usage and reports
this vital information back to you. The information returned provides
information regarding the use of your procedure cache and how much of the
buffer cache you are currently using.

An important piece of information is the size of the largest query plan. We'll
talk about that more below.

It is best to run dbcc memusage after your ASE has reached a working set. For
example, at the end of the day or during lunch time.

Running dbcc memusage will freeze the dataserver while it does its work.
The more memory you have configured for the ASE the longer it'll take. Our
experience is that for an ASE with 300MB it'll take about four minutes to
execute. During this time, nothing else will execute: no user queries, no
sp_who's...

In order to run dbcc memusage you must have sa privileges. Here's a sample
execution for discussion purposes:

1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go
Memory Usage:

                        Meg.   2K Blks        Bytes

Configured Memory:  300.0000    153600    314572800

Code size:            2.6375      1351      2765600
Kernel Structures:   77.6262     39745     81396975
Server Structures:   54.4032     27855     57045920
Page Cache:         129.5992     66355    135894640
Proc Buffers:         1.1571       593      1213340
Proc Headers:        25.0840     12843     26302464

Number of page buffers: 63856
Number of proc buffers: 15964

Buffer Cache, Top 20:

DB Id    Object Id   Index Id   2K Buffers

    6    927446498          0         9424
    6    507969006          0         7799
    6    959446612          0         7563
    6    116351649          0         7428
    6   2135014687          5         2972
    6    607445358          0         2780
    6    507969006          2         2334
    6   2135014687          0         2047
    6    506589013          0         1766
    6   1022066847          0         1160
    6    116351649        255          987
    6    927446498          8          897
    6    927446498         10          733
    6    959446612          7          722
    6    506589013          1          687
    6    971918604          0          686
    6    116351649          6          387

Procedure Cache, Top 20:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
Database Id: 6
Object Id: 1668357178
Object Name: lp_cm_subcase_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 10
Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
----
Database Id: 6
Object Id: 132351706
Object Name: csp_get_case
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 9
Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
----
Database Id: 6
Object Id: 1858261845
Object Name: lp_get_last_caller_new
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages
...

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse.
Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the ASE.

Memory Usage:

                        Meg.   2K Blks        Bytes

Configured Memory:  300.0000    153600    314572800

Code size:            2.6375      1351      2765600
Kernel Structures:   77.6262     39745     81396975
Server Structures:   54.4032     27855     57045920
Page Cache:         129.5992     66355    135894640
Proc Buffers:         1.1571       593      1213340
Proc Headers:        25.0840     12843     26302464

Number of page buffers: 63856
Number of proc buffers: 15964


The Configured Memory does not equal the sum of the individual components.
It does in the sybooks example but in practice it doesn't always. This is
not critical and it is simply being noted here.

The Kernel Structures and Server structures are of mild interest. They can be
used to cross-check that the pre-allocation is what you believe it to be. The
salient line items are Number of page buffers and Number of proc buffers.

The Number of proc buffers translates directly to the number of 2K pages
available for the procedure cache.

The Number of page buffers is the number of 2K pages available for the buffer
cache.

As a side note and not trying to muddle things, these last two pieces of
information can also be obtained from the errorlog:

... Number of buffers in buffer cache: 63856.
... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and
63,856 2K pages (~126MB) for the buffer cache.

Buffer Cache

The buffer cache contains the data pages that the ASE will be either flushing
to disk or transmitting to a user connection.

If this area is too small, the ASE must flush 2K pages sooner than might be
necessary to satisfy a user connection's request.

For example, in most database applications there are small edit tables that are
used frequently by the application. These tables will populate the buffer cache
and normally will remain resident during the entire life of the ASE. This is
good because a user connection may request validation and the ASE will find the
data page(s) resident in memory. If however there is insufficient memory
configured, then these small tables will be flushed out of the buffer cache in
order to satisfy another query. The next time a validation is requested, the
tables will have to be re-read from disk in order to satisfy the request. Your
performance will degrade.

Memory access is easily an order of magnitude faster than performing a physical
I/O.

In this example we know from the previous section that we have 63,856 2K pages
(or buffers) available in the buffer cache. The question to answer is, "do we
have sufficient buffer cache configured?"

The following is the output of the dbcc memusage regarding the buffer cache:

Buffer Cache, Top 20:

DB Id    Object Id   Index Id   2K Buffers

    6    927446498          0         9424
    6    507969006          0         7799
    6    959446612          0         7563
    6    116351649          0         7428
    6   2135014687          5         2972
    6    607445358          0         2780
    6    507969006          2         2334
    6   2135014687          0         2047
    6    506589013          0         1766
    6   1022066847          0         1160
    6    116351649        255          987
    6    927446498          8          897
    6    927446498         10          733
    6    959446612          7          722
    6    506589013          1          687
    6    971918604          0          686
    6    116351649          6          387
Index Legend
+-------+---------------------+
| Value | Definition          |
|-------+---------------------|
| 0     | Table data          |
|-------+---------------------|
| 1     | Clustered index     |
|-------+---------------------|
| 2-250 | Nonclustered        |
|       | indexes             |
|-------+---------------------|
| 255   | Text pages          |
+-------+---------------------+

* To translate the DB Id use select db_name(#) to map back to the database
name.
* To translate the Object Id, use the respective database and use the select
object_name(#) command.
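
For example, to decode the largest entry in the listing above (the use
statement is a placeholder for whatever db_name() returns):

select db_name(6)               /* the DB Id column */
go
use <that_database>
go
select object_name(927446498)   /* the biggest buffer cache consumer */
go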

It's obvious that the first 10 items take up the largest portion of the buffer
cache. Sum these values and compare the result to the amount of buffer cache
configured.

Summing the 10 items nets a result of 45,273 2K data pages. Comparing that to
the number of pages configured, 63,856, we see that this ASE has sufficient
memory configured.

When do I need more Buffer Cache?

I follow the following rules of thumb to determine when I need more buffer
cache:

* If the sum of all the entries reported is equal to the number of pages
configured and all entries are relatively the same size, crank it up.
* Note the natural groupings that occur in the example. If the difference
between any of the groups is greater than an order of magnitude I'd be
suspicious, but only if the sum of the larger groups is very close to the
number of pages configured.

Procedure Cache

If the procedure cache is not of sufficient size you may get sporadic 701
errors:

There is insufficient system memory to run this query.

In order to calculate the correct procedure cache one needs to apply the
following formula (found in ASE Troubleshooting Guide - Chapter 2, Procedure
Cache Sizing):

proc cache size = max(# of concurrent users) * (size of the largest plan) *
1.25

The flaw with the above formula is that if 10% of the users are
executing the largest plan, then you'll overshoot. If you have distinct
classes of connections whose largest plans are mutually exclusive then
you need to account for that:

ttl proc cache = proc cache size * x% + proc cache size * y% ...

The max(# of concurrent users) is not the number of user connections configured
but rather the actual number of connections during the peak period.

To compute the size of the largest [query] plan take the results from the dbcc
memusage's, Procedure Cache section and apply the following formula:

query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the
output of the dbcc memusage:

...
Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
...

Entering the respective numbers, the query plan size for lp_cm_case_list is
21K:

query plan size = 339072 / 16
query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and
the largest value would be plugged into the procedure cache size formula:

Query Plan Sizes
+------------------------+-------+
|                        | Query |
| Object                 | Plan  |
|                        | Size  |
|------------------------+-------|
| lp_cm_case_list        | 21K   |
|------------------------+-------|
| lp_cm_subcase_list     | 21K   |
|------------------------+-------|
| csp_get_case           | 19K   |
|------------------------+-------|
| lp_get_last_caller_new | 28K   |
+------------------------+-------+

The size of the largest [query] plan is 28K.

Entering these values into the formula:

proc cache size = max(# of concurrent users) * (size of the largest plan) *
1.25
proc cache size = 491 connections * 28K * 1.25
proc cache size = 17,185 2K pages required

Our example ASE has 15,964 2K pages configured but 17,185 2K pages are
required. This ASE can benefit by having more procedure cache configured.

This can be done one of two ways:

1. If you have some headroom in your buffer cache, then sp_configure
"procedure cache" to increase the ratio of procedure cache to buffer cache
or

procedure cache =
[ proposed procedure cache ] /
( [ current procedure cache ] + [ current buffer cache ] )

The new procedure cache would be 22%:

procedure cache = 17,185 / ( 15,964 + 63,856 )
procedure cache = .2153 or 22%

2. If the buffer cache cannot be shrunken, then sp_configure "memory" to
increase the total memory:

mem size =
([ proposed procedure cache ]) /
([ current procedure cache ] / [ current configured memory ])

The new memory size would be 165,349 2K pages, assuming that the
procedure cache percentage is unchanged:

mem size = 17,185 / ( 15,964 / 153,600 )
mem size = 165,349 2K pages

Back to top

-------------------------------------------------------------------------------

1.5.8: Why should I use stored procedures?

-------------------------------------------------------------------------------

There are many advantages to using stored procedures (unfortunately they do not
handle the text/image types):

* Security - you can revoke access to the base tables and only allow users to
access and manipulate the data via the stored procedures.
* Performance - stored procedures are parsed and a query plan is compiled.
This information is stored in the system tables and it only has to be done
once.
* Network - if you have users who are on a WAN (slow connection) having
stored procedures will improve throughput because less bytes need to flow
down the wire from the client to ASE.
* Tuning - if you have all your SQL code housed in the database, then it's
easy to tune the stored procedure without affecting the clients (unless of
course the parameters change).
* Modularity - during application development, the application designer can
concentrate on the front-end and the DB designer can concentrate on the
ASE.
* Network latency - a client on a LAN may seem slower if it is sending large
numbers of separate requests to a database server, bundling them into one
procedure call may improve responsiveness. Also, servers handling large
numbers of small requests can spend a surprising amount of CPU time
performing network IO.
* Minimise blocks and deadlocks - it is a lot easier to handle a deadlock if
the entire transaction is performed in one database request, also locks
will be held for a shorter time, improving concurrency and potentially
reducing the number of deadlocks. Further, it is easier to ensure that all
tables are accessed in a consistent order if code is stored centrally
rather than dispersed among a number of apps.

Back to top

-------------------------------------------------------------------------------


Performance and Tuning

1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered indexes?
1.5.4 Optimistic versus pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

Server Monitoring General Troubleshooting ASE FAQ

-------------------------------------------------------------------------------

1.5.1: Sybase ASE Performance and Tuning

-------------------------------------------------------------------------------

Before going any further, Eric Miner (eric....@sybase.com) has made available
two presentations that he made at Techwave 1999. The first covers the use of
optdiag. The second covers features in the way the optimiser works in ASE
11.9.2 and 12. These are Powerpoint slides converted to web pages, so they
might be tricky to read with a text based browser!

All Components Affect Response Time & Throughput

We often think that high performance is defined as a fast data server, but the
picture is not that simple. Performance is determined by all these factors:

* The client application itself:
+ How efficiently is it written?
+ We will return to this later, when we look at application tuning.
* The client-side library:
+ What facilities does it make available to the application?
+ How easy are they to use?
* The network:
+ How efficiently is it used by the client/server connection?
* The DBMS:
+ How effectively can it use the hardware?
+ What facilities does it supply to help build efficient fast
applications?
* The size of the database:
+ How long does it take to dump the database?
+ How long to recreate it after a media failure?

Unlike some products which aim at performance on paper, Sybase aims at solving
the multi-dimensional problem of delivering high performance for real
applications.

OBJECTIVES

To gain an overview of important considerations and alternatives for the
design, development, and implementation of high performance systems in the
Sybase client/server environment. The issues we will address are:

* Client Application and API Issues
* Physical Database Design Issues
* Networking Issues
* Operating System Configuration Issues
* Hardware Configuration Issues
* ASE Configuration Issues

Client Application and Physical Database Design design decisions will
account for over 80% of your system's "tuneable" performance so ... plan
your project resources accordingly !

It is highly recommended that every project include individuals who have taken
Sybase Education's Performance and Tuning course. This 5-day course provides
the hands-on experience essential for success.

Client Application Issues

* Tuning Transact-SQL Queries
* Locking and Concurrency
* ANSI Changes Affecting Concurrency
* Application Deadlocking
* Optimizing Cursors in v10
* Special Issues for Batch Applications
* Asynchronous Queries
* Generating Sequential Numbers
* Other Application Issues

Tuning Transact-SQL Queries

* Learn the Strengths and Weaknesses of the Optimizer
* One of the largest factors determining performance is TSQL! Test not only
for efficient plans but also semantic correctness.
* Optimizer will cost every permutation of accesses for queries involving 4
tables or less. Joins of more than 4 tables are "planned" 4-tables at a
time (as listed in the FROM clause) so not all permutations are evaluated.
You can influence the plans for these large joins by the order of tables in
the FROM clause.
* Avoid the following, if possible:
+ What are SARGS?

This is short for search arguments. A search argument is essentially a
constant value such as:
o "My company name"
o 3448

but not:
o 344 + 88
o like "%what you want%"
+ Mathematical Manipulation of SARGs


SELECT name FROM employee WHERE salary * 12 > 100000
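
Where possible, rewrite so the arithmetic is applied to the constant
rather than the column; the clause is then a SARG and an index on
salary can be used. A minimal sketch of the rewrite:

SELECT name FROM employee WHERE salary > 100000 / 12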

+ Use of Incompatible Datatypes Between Column and its SARG


Float & Int, Char & Varchar, Binary & Varbinary are Incompatible;

Int & Intn (allow nulls) OK

+ Use of multiple "OR" Statements - especially on different columns in
same table. If any portion of the OR clause requires a table scan, it
will! OR Strategy requires additional cost of creating and sorting a
work table.
+ Not using the leading portion of the index (unless the query is
completely covered)
+ Substituting "OR" with "IN (value1, value2, ... valueN) Optimizer
automatically converts this to an "OR"
+ Use of Non-Equal Expressions (!=) in WHERE Clause.
* Use Tools to Evaluate and Tune Important/Problem Queries
+ Use the "set showplan on" command to see the plan chosen as "most
efficient" by optimizer. Run all queries through during development and
testing to ensure accurate access model and known performance.
Information comes through the Error Handler of a DB-Library
application.
+ Use the "dbcc traceon(3604, 302, 310)" command to see each alternative
plan evaluated by the optimizer. Generally, this is only necessary to
understand why the optimizer won't give you the plan you want or need
(or think you need)!
+ Use the "set statistics io on" command to see the number of logical and
physical i/o's for a query. Scrutinize those queries with high logical
i/o's.
+ Use the "set statistics time on" command to see the amount of time
(elapsed, execution, parse and compile) a query takes to run.
+ If the optimizer turns out to be a "pessimizer", use the "set forceplan
on" command to change join order to be the order of the tables in the
FROM clause.
+ If the optimizer refuses to select the proper index for a table, you
can force it by adding the index id in parentheses after the table name
in the FROM clause.


SELECT * FROM orders(2), order_detail(1) WHERE ...

This may cause portability issues should index id's vary/change by
site !

Locking and Concurrency

* The Optimizer Decides on Lock Type and Granularity
* Decisions on lock type (share, exclusive, or update) and granularity (page
or table) are made during optimization so make sure your updates and
deletes don't scan the table !
* Exclusive Locks are Only Released Upon Commit or Rollback
* Lock Contention can have a large impact on both throughput and response
time if not considered both in the application and database design !
* Keep transactions as small and short as possible to minimize blocking.
Consider alternatives to "mass" updates and deletes such as a v10.0 cursor
in a stored procedure which frequently commits.
* Never include any "user interaction" in the middle of transactions.
* Shared Locks Generally Released After Page is Read
* Share locks "roll" through result set for concurrency. Only "HOLDLOCK" or
"Isolation Level 3" retain share locks until commit or rollback. Remember
also that HOLDLOCK is for read-consistency. It doesn't block other readers
!
* Use optimistic locking techniques such as timestamps and the tsequal()
function to check for updates to a row since it was read (rather than
holdlock)

ANSI Changes Affecting Concurrency

* Chained Transactions Risk Concurrency if Behavior not Understood
* Sybase defaults each DML statement to its own transaction if not specified;
* ANSI automatically begins a transaction with any SELECT, FETCH, OPEN,
INSERT, UPDATE, or DELETE statement;
* If Chained Transaction must be used, extreme care must be taken to ensure
locks aren't left held by applications unaware they are within a
transaction! This is especially crucial if running at Level 3 Isolation
* Lock at the Level of Isolation Required by the Query
* Read Consistency is NOT a requirement of every query.
* Choose level 3 only when the business model requires it
* Running at Level 1 but selectively applying HOLDLOCKs as needed is safest
* If you must run at Level 3, use the NOHOLDLOCK clause when you can !
* Beware of (and test) ANSI-compliant third-party applications for
concurrency

Application Deadlocking

Prior to ASE 10 cursors, many developers simulated cursors by using two or more
connections (dbproc's) and divided the processing between them. Often, this
meant one connection had a SELECT open while "positioned" UPDATEs and DELETEs
were issued on the other connection. The approach inevitably leads to the
following problem:

1. Connection A holds a share lock on page X (remember "Rows Pending" on SQL
Server leave a share lock on the "current" page).
2. Connection B requests an exclusive lock on the same page X and waits...
3. The APPLICATION waits for connection B to succeed before invoking whatever
logic will remove the share lock (perhaps dbnextrow). Of course, that never
happens ...

Since Connection A never requests a lock which Connection B holds, this is NOT
a true server-side deadlock. It's really an "application" deadlock !

Design Alternatives

1. Buffer additional rows in the client that are "nonupdateable". This forces
the shared lock onto a page on which the application will not request an
exclusive lock.
2. Re-code these modules with CT-Library cursors (aka. server-side cursors).
These cursors avoid this problem by disassociating command structures from
connection structures.
3. Re-code these modules with DB-Library cursors (aka. client-side cursors).
These cursors avoid this problem through buffering techniques and
re-issuing of SELECTs. Because of the re-issuing of SELECTs, these cursors
are not recommended for high transaction sites !

Optimizing Cursors with v10.0

* Always Declare Cursor's Intent (i.e. Read Only or Updateable)
* Allows for greater control over concurrency implications
* If not specified, ASE will decide for you and usually choose updateable
* Updateable cursors use UPDATE locks preventing other U or X locks
* Updateable cursors that include indexed columns in the update list may
table scan
* SET Number of Rows for each FETCH
* Allows for greater Network Optimization over ANSI's 1-row fetch
* Rows fetched via Open Client cursors are transparently buffered in the
client:

FETCH -> [ Open Client buffers ] <- N rows
* Keep Cursor Open on a Commit / Rollback
* ANSI closes cursors with each COMMIT causing either poor throughput (by
making the server re-materialize the result set) or poor concurrency (by
holding locks)
* Open Multiple Cursors on a Single Connection
* Reduces resource consumption on both client and Server
* Eliminates the risk of a client-side deadlock with itself
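
A minimal sketch pulling these together (names are illustrative): declare
the cursor's intent and batch the fetches:

declare titles_crsr cursor
for select title_id, price from titles
for read only
go
set cursor rows 100 for titles_crsr
go
open titles_crsr
fetch titles_crsr /* returns up to 100 rows per network round trip */
go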

Special Issues for Batch Applications

ASE was not designed as a batch subsystem! It was designed as an RDBMS for
large multi-user applications. Designers of batch-oriented applications should
consider the following design alternatives to maximize performance :

Design Alternatives :

* Minimize Client/Server Interaction Whenever Possible
* Don't turn ASE into a "file system" by issuing single table / single row
requests when, in actuality, set logic applies.
* Maximize TDS packet size for efficient Interprocess Communication (v10
only)
* New ASE 10.0 cursors declared and processed entirely within stored
procedures and triggers offer significant performance gains in batch
processing.
* Investigate Opportunities to Parallelize Processing
* Breaking up single processes into multiple, concurrently executing,
connections (where possible) will outperform single-streamed processes
every time.
* Make Use of TEMPDB for Intermediate Storage of Useful Data

Asynchronous Queries

Many, if not most, applications and 3rd Party tools are coded to send queries
with the DB-Library call dbsqlexec( ) which is a synchronous call ! It sends a
query and then waits for a response from ASE that the query has completed !

Designing your applications for asynchronous queries provides many benefits:

1. A "Cooperative" multi-tasking application design under Windows will allow
users to run other Windows applications while your long queries are
processed !
2. Provides design opportunities to parallelize work across multiple ASE
connections.

Implementation Choices:

* System 10 Client Library Applications:
* True asynchronous behaviour is built into the entire library. Through the
appropriate use of call-backs, asynchronous behavior is the normal
processing paradigm.
* Windows DB-Library Applications (not true async but polling for data):
* Use dbsqlsend(), dbsqlok(), and dbdataready() in conjunction with some
additional code in WinMain() to pass control to a background process. Code
samples which outline two different Windows programming approaches (a
PeekMessage loop and a Windows Timer approach) are available in the
Microsoft Software Library on Compuserve (GO MSL). Look for SQLBKGD.ZIP
* Non-PC DB-Library Applications (not true async but polling for data):
* Use dbsqlsend(), dbsqlok(), and dbpoll() to utilize non-blocking functions.

Generating Sequential Numbers

Many applications use unique sequentially increasing numbers, often as primary
keys. While there are good benefits to this approach, generating these keys can
be a serious contention point if you are not careful. For a complete discussion
of the alternatives, download Malcolm Colton's White Paper on Sequential Keys
from the SQL Server Library of our OpenLine forum on Compuserve.

The two best alternatives are outlined below.

1. "Primary Key" Table Storing Last Key Assigned
+ Minimize contention by either using a separate "PK" table for each user
table or padding out each row to a page. Make sure updates are
"in-place".
+ Don't include the "PK" table's update in the same transaction as the
INSERT. It will serialize the transactions.
BEGIN TRAN

UPDATE pk_table SET nextkey = nextkey + 1
[WHERE table_name = @tbl_name]
COMMIT TRAN

/* Now retrieve the information */
SELECT nextkey FROM pk_table
[WHERE table_name = @tbl_name]

+ "Gap-less" sequences require additional logic to store and retrieve
rejected values
2. IDENTITY Columns (v10.0 only)
+ Last key assigned for each table is stored in memory and automatically
included in all INSERTs (BCP too). This should be the method of choice
for performance.
+ Choose a large enough numeric or else all inserts will stop once the
max is hit.
+ Potential rollbacks in long transactions may cause gaps in the sequence
!
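
A minimal sketch (schema is illustrative): the server assigns the key at
INSERT time, so no "PK" table is touched:

create table orders
(
order_id numeric(12,0) identity, /* size generously; inserts stop at max */
cust_id int not null,
amount money not null
)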

Other Application Issues

+ Transaction Logging Can Bottleneck Some High Transaction Environments
+ Committing a Transaction Must Initiate a Physical Write for
Recoverability
+ Implementing multiple statements as a transaction can assist in these
environments by minimizing the number of log writes (log is flushed to
disk on commits).
+ Utilizing the Client Machine's Processing Power Balances Load
+ Client/Server doesn't dictate that everything be done on Server!
+ Consider moving "presentation" related tasks such as string or
mathematical manipulations, sorting, or, in some cases, even
aggregating to the client.
+ Populating of "Temporary" Tables Should Use "SELECT INTO" - balance
this with dynamic creation of temporary tables in an OLTP environment.
Dynamic creation may cause blocks in your tempdb.
+ "SELECT INTO" operations are not logged and thus are significantly
faster than there INSERT with a nested SELECT counterparts.
+ Consider Porting Applications to Client Library Over Time
+ True Asynchronous Behavior Throughout Library
+ Array Binding for SELECTs
+ Dynamic SQL
+ Support for ClientLib-initiated callback functions
+ Support for Server-side Cursors
+ Shared Structures with Server Library (Open Server 10)

Physical Database Design Issues

+ Normalized -vs- Denormalized Design
+ Index Selection
+ Promote "Updates-in-Place" Design
+ Promote Parallel I/O Opportunities

Normalized -vs- Denormalized

+ Always Start with a Completely Normalized Database
+ Denormalization should be an optimization taken as a result of a
performance problem
+ Benefits of a normalized database include :
1. Accelerates searching, sorting, and index creation since tables are
narrower ;
2. Allows more clustered indexes and hence more flexibility in tuning
queries, since there are more tables ;
3. Accelerates index searching since indexes tend to be narrower and
perhaps shorter ;
4. Allows better use of segments to control physical placement of
tables ;
5. Fewer indexes per table, helping UPDATE, INSERT, and DELETE
performance ;
6. Fewer NULLs and less redundant data, increasing compactness of the
database ;
7. Accelerates trigger execution by minimizing the extra integrity
work of maintaining redundant data.
+ Joins are Generally Very Fast Provided Proper Indexes are Available
1. Normal caching and the cindextrips parameter (discussed in the
Server section) mean each join will do on average only 1-2
physical I/Os.
2. Cost of a logical I/O (get page from cache) is only 1-2
milliseconds.
+ There Are Some Good Reasons to Denormalize
1. All queries require access to the "full" set of joined data.
2. Majority of applications scan entire tables doing joins.
3. Computational complexity of derived columns requires storage for
SELECTs.
4. Others ...

Index Selection

+ Without a clustered index, all INSERTs and "out-of-place" UPDATEs go to
the last page. The lock contention in high transaction environments
would be prohibitive. This is also true for INSERTs to a clustered
index on a monotonically increasing key.
+ High INSERT environments should always cluster on a key which provides
the most "randomness" (to minimize lock / device contention) that is
usable in many queries. Note this is generally not your primary key !
+ Prime candidates for clustered index (in addition to the above) include
:
o Columns Accessed by a Range
o Columns Used with Order By, Group By, or Joins
+ Indexes Help SELECTs and Hurt INSERTs
+ Too many indexes can significantly hurt performance of INSERTs and
"out-of-place" UPDATEs.
+ Prime candidates for nonclustered indexes include :
o Columns Used in Queries Requiring Index Coverage
o Columns Used to Access Less than 20% (rule of thumb) of the Data.
+ Unique indexes should be defined as UNIQUE to help the optimizer
+ Minimize index page splits with Fillfactor (helps concurrency and
minimizes deadlocks)
+ Keep the Size of the Key as Small as Possible
+ Accelerates index scans and tree traversals
+ Use small datatypes whenever possible. Numerics should also be used
whenever possible as they compare faster than strings.
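
A minimal sketch of these guidelines (names are illustrative): cluster a
high-INSERT table on a key that spreads the inserts, and use fillfactor
to reduce page splits:

create clustered index cust_cl
on customer (region_code, cust_name)
with fillfactor = 80
go
create unique nonclustered index cust_nc1
on customer (cust_id)
go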

Promote "Update-in-Place" Design

+ "Update-in-Place" Faster by Orders of Magnitude
+ Performance gain dependent on number of indexes. Recent benchmark (160
byte rows, 1 clustered index and 2 nonclustered) showed 800%
difference!
+ Alternative ("Out-of-Place" Update) implemented as a physical DELETE
followed by a physical INSERT. These tactics result in:
1. Increased Lock Contention
2. Increased Chance of Deadlock
3. Decreased Response Time and Throughput
+ Currently (System 10 and below), Rules for "Update-in-Place" Behavior
Include :
1. Columns updated can not be variable length or allow nulls
2. Columns updated can not be part of an index used to locate the row
to update
3. No update trigger on table being updated (because the inserted and
deleted tables used in triggers get their data from the log)


In v4.9.x and below, only one row may be affected and the
optimizer must know this in advance by choosing a UNIQUE index.
System 10 eliminated this limitation.

Promote Parallel I/O Opportunities

+ For I/O-bound Multi-User Systems, Use A lot of Logical and Physical
Devices
+ Plan balanced separation of objects across logical and physical
devices.
+ Increased number of physical devices (including controllers) ensures
physical bandwidth
+ Increased number of logical Sybase devices ensures minimal contention
for internal resources. Look at SQL Monitor's Device I/O Hit Rate for
clues. Also watch out for the 128 device limit per database.
+ Create Database (in v10) starts parallel I/O on up to 6 devices at a
time. If taken advantage of, expect an 800% performance
gain. A 2Gb TPC-B database that took 4.5 hours under 4.9.1 to create
now takes 26 minutes if created on 6 independent devices !
+ Use Sybase Segments to Ensure Control of Placement


This is the only way to guarantee logical separation of objects on
devices to reduce contention for internal resources.

+ Dedicate a separate physical device and controller to the transaction
log in tempdb too.
+ Optimize TEMPDB Also if Heavily Accessed
+ Increased number of logical Sybase devices ensures minimal contention
for internal resources.
+ Systems requiring increased log throughput today must partition the
database into separate databases.

Breaking up one logical database into multiple smaller databases
increases the number of transaction logs working in parallel.

Networking Issues

+ Choice of Transport Stacks
+ Variable Sized TDS Packets
+ TCP/IP Packet Batching

Choice of Transport Stacks for PCs

+ Choose a Stack that Supports "Attention Signals" (aka. "Out of Band
Data")
+ Provides for the most efficient mechanism to cancel queries.
+ Essential for sites providing ad-hoc query access to large databases.
+ Without "Attention Signal" capabilities (or the urgent flag in the
connection string), the DB-Library functions DBCANQUERY ( ) and
DBCANCEL ( ) will cause ASE to send all rows back to the Client
DB-Library as quickly as possible so as to complete the query. This can
be very expensive if the result set is large and, from the user's
perspective, causes the application to appear as though it has hung.
+ With "Attention Signal" capabilities, Net-Library is able to send an
out-of-sequence packet requesting the ASE to physically throw away any
remaining results providing for instantaneous response.
+ Currently, the following network vendors and associated protocols
support an "Attention Signal" capable implementation:
1. NetManage NEWT
2. FTP TCP
3. Named Pipes (10860) - Do not use urgent parameter with this Netlib
4. Novell LAN Workplace v4.10 - Patch required from Novell
5. Novell SPX - Implemented internally through an "In-Band" packet
6. Wollongong Pathway
7. Microsoft TCP - Patch required from Microsoft

Variable-sized TDS Packets

Pre-v4.6 TDS does not optimize network performance: the current ASE TDS
packet size is limited to 512 bytes while network frame sizes are
significantly larger (1508 bytes on Ethernet and 4120 bytes on Token Ring).

The specific protocol may have other limitations!

For example:
+ IPX is limited to 576 bytes in a routed network.
+ SPX requires acknowledgement of every packet before it will send
another. A recent benchmark measured a 300% performance hit over TCP in
"large" data transfers (small transfers showed no difference).
+ Open Client Apps can "Request" a Larger Packet Shown to have
significant performance improvement on "large" data transfers such as
BCP, Text / Image Handling, and Large Result Sets.
o clients:
# isql -Usa -Annnnn
# bcp -Usa -Annnnn
# ct_con_props (connection, CS_SET, CS_PACKETSIZE, &packetsize,
sizeof(packetsize), NULL)
o An "SA" must Configure each Servers' Defaults Properly
# sp_configure "default packet size", nnnnn - Sets default packet
size per client connection (defaults to 512)
# sp_configure "maximum packet size", nnnnn - Sets maximum TDS
packet size per client connection (defaults to 512)
# sp_configure "additional netmem", nnnnn - Additional memory for
large packets taken from separate pool. This memory does not
come from the sp_configure memory setting.

Optimal value = (# connections using large packets * large
packet size * 3) + an additional 1-2% of the above
calculation for overhead

Each connection using large packets has 3 network buffers: one
to read; one to write; and one overflow.
@ Default network memory - Default-sized packets come from
this memory pool.
@ Additional Network memory - Big packets come from this
memory pool.

If not enough memory is available in this pool, the server
will give a smaller packet size, down to the default.
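
As a worked example (figures assumed for illustration): 50 connections
using 2048-byte packets need roughly 50 * 2048 * 3 = 307,200 bytes of
additional netmem, plus 1-2% overhead, say 313,344 bytes:

sp_configure "maximum packet size", 2048
go
sp_configure "additional netmem", 313344
go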

TCP/IP Packet Batching

+ TCP Networking Layer Defaults to "Packet Batching"
+ This means that TCP/IP will batch small logical packets into one larger
physical packet by briefly delaying packets in an effort to fill the
physical network frames (Ethernet, Token-Ring) with as much data as
possible.
+ Designed to improve performance in terminal emulation environments
where there are mostly only keystrokes being sent across the network.
+ Some Environments Benefit from Disabling Packet Batching
+ Applies mainly to socket-based networks (BSD) although we have seen
some TLI networks such as NCR's benefit.
+ Applications sending very small result sets or statuses from sprocs
will usually benefit. Benchmark with your own application to be sure.
+ This makes ASE open all connections with the TCP_NODELAY option.
Packets will be sent regardless of size.
+ To disable packet batching, in pre-Sys 11, start ASE with the 1610
Trace Flag.


$SYBASE/dataserver -T1610 -d /usr/u/sybase/master.dat ...

Your errorlog will indicate the use of this option with the message:

ASE booted with TCP_NODELAY enabled.

Operating System Issues

+ Never Let ASE Page Fault
+ It is better to configure ASE with less memory and do more physical
database I/O than to page fault. OS page faults are synchronous and
stop the entire dataserver engine until the page fault completes. Since
database I/O's are asynchronous, other user tasks can continue!
+ Use Process Affinitying in SMP Environments, if Supported
+ Affinitying dataserver engines to specific CPUs minimizes overhead
associated with moving process information (registers, etc) between
CPUs. Most implementations will preference other tasks onto other CPUs
as well allowing even more CPU time for dataserver engines.
+ Watch out for OS's which are not fully symmetric. Affinitying
dataserver engines onto CPUs that are heavily used by the OS can
seriously degrade performance. Benchmark with your application to find
optimal binding.
+ Increase priority of dataserver engines, if supported
+ Give ASE the opportunity to do more work. If ASE has nothing to do, it
will voluntarily yield the CPU.
+ Watch out for OS's which externalize their async drivers. They need to
run too!
+ Use of OS Monitors to Verify Resource Usage
+ The OS CPU monitors only "know" that an instruction is being executed.
With ASE's own threading and scheduling, it can routinely be 90% idle
when the OS thinks it's 90% busy. SQL Monitor shows real CPU usage.
+ Look into high disk I/O wait time or I/O queue lengths. These indicate
physical saturation points in the I/O subsystem or poor data
distribution.
+ Disk Utilization above 50% may be subject to queuing effects which
often manifest themselves as uneven response times.
+ Look into high system call counts which may be symptomatic of problems.
+ Look into high context switch counts which may also be symptomatic of
problems.
+ Optimize your kernel for ASE (minimal OS file buffering, adequate
network buffers, appropriate KEEPALIVE values, etc).
+ Use OS Monitors and SQL Monitor to Determine Bottlenecks
+ Most likely "Non-Application" contention points include:
Resource Where to Look
--------- --------------
CPU Performance SQL Monitor - CPU and Trends

Physical I/O Subsystem OS Monitoring tools - iostat, sar...

Transaction Log SQL Monitor - Device I/O and
Device Hit Rate
on Log Device

ASE Network Polling SQL Monitor - Network and Benchmark
Baselines

Memory SQL Monitor - Data and Cache
Utilization

+ Use of Vendor-supported Striping such as LVM and RAID
+ These technologies provide a very simple and effective mechanism of
load balancing I/O across physical devices and channels.
+ Use them provided they support asynchronous I/O and reliable writes.
+ These approaches do not eliminate the need for Sybase segments to
ensure minimal contention for internal resources.
+ Non-read-only environments should expect performance degradations when
using RAID levels other than level 0. These levels all include fault
tolerance where each write requires additional reads to calculate a
"parity" as well as the extra write of the parity data.

Hardware Configuration Issues

+ Number of CPUs
+ Use information from SQL Monitor to assess ASE's CPU usage.
+ In SMP environments, dedicate at least one CPU for the OS.
+ Advantages and scaling of VSA is application-dependent. VSA was
architected with large multi-user systems in mind.
+ I/O Subsystem Configuration
+ Look into high Disk I/O Wait Times or I/O Queue Lengths. These may
indicate physical I/O saturation points or poor data distribution.
+ Disk Utilization above 50% may be subject to queuing effects which
often manifest themselves as uneven response times.
+ Logical Volume configurations can impact performance of operations such
as create database, create index, and bcp. To optimize for these
operations, create Logical Volumes such that they start on different
channels / disks to ensure I/O is spread across channels.
+ Discuss device and controller throughput with hardware vendors to
ensure channel throughput high enough to drive all devices at maximum
rating.

General ASE Tuning

+ Changing Values with sp_configure or buildmaster


It is imperative that you only use sp_configure to change those
parameters that it currently maintains because the process of
reconfiguring actually recalculates a number of other buildmaster
parameters. Using the Buildmaster utility to change a parameter
"managed" by sp_configure may result in a mis-configured server and
cause adverse performance or even worse ...

+ Sizing Procedure Cache
o ASE maintains an MRU-LRU chain of stored procedure query plans. As
users execute sprocs, ASE looks in cache for a query plan to use.
However, stored procedure query plans are currently not re-entrant!
If a query plan is available, it is placed on the MRU and execution
begins. If no plan is in memory, or if all copies are in use, a new
copy is read from the sysprocedures table. It is then optimized and
put on the MRU for execution.
o Use dbcc memusage to evaluate the size and number of each sproc
currently in cache. Use SQL Monitor's cache statistics to get your
average cache hit ratio. Ideally during production, one would hope
to see a high hit ratio to minimize the procedure reads from disk.
Use this information in conjunction with your desired hit ratio to
calculate the amount of memory needed.
+ Memory
o Tuning memory is more a price/performance issue than anything else
! The more memory you have available, the greater the probability
of minimizing physical I/O. This is an important goal though. Not
only does physical I/O take significantly longer, but threads doing
physical I/O must go through the scheduler once the I/O completes.
This means that work on behalf of the thread may not actually
continue to execute for quite a while !
o There are no longer (as of v4.8) any inherent limitations in ASE
which cause a point of diminishing returns on memory size.
o Calculate Memory based on the following algorithm :


Total Memory = Dataserver Executable Size (in bytes) +
Static Overhead of 1 Mb +
User Connections x 40,960 bytes +
Open Databases x 644 bytes +
Locks x 32 bytes +
Devices x 45,056 bytes +
Procedure Cache +
Data Cache
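
o As a worked example (figures assumed for illustration): a 4 Mb
dataserver executable, 200 user connections, 10 open databases,
5,000 locks and 20 devices give roughly 4 Mb + 1 Mb + 8 Mb + 6 Kb +
160 Kb + 900 Kb, i.e. about 14 Mb, before adding the procedure and
data caches.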

+ Recovery Interval
o As users change data in ASE, only the transaction log is written to
disk right away for recoverability. "Dirty" data and index pages
are kept in cache and written to disk at a later time. This
provides two major benefits:
1. Many transactions may change a page yet only one physical write
is done
2. ASE can schedule the physical writes "when appropriate"
o ASE must eventually write these "dirty" pages to disk.
o A checkpoint process wakes up periodically and "walks" the cache
chain looking for dirty pages to write to disk
o The recovery interval controls how often checkpoint writes dirty
pages.
+ Tuning Recovery Interval
o A low value may cause unnecessary physical I/O lowering throughput
of the system. Automatic recovery is generally much faster during
boot-up.
o A high value minimizes unnecessary physical I/O and helps
throughput of the system. Automatic recovery may take substantial
time during boot-up.

Audit Performance Tuning for v10.0

+ Potentially as Write Intensive as Logging
+ Isolate Audit I/O from other components.
+ Since auditing nearly always involves sequential writes, RAID Level 0
disk striping or other byte-level striping technology should provide
the best performance (theoretically).
+ Size Audit Queue Carefully
+ Audit records generated by clients are stored in an in-memory audit
queue until they can be processed.
+ Tune the queue's size with sp_configure "audit queue size", nnnn (in
rows).
+ Sizing this queue too small will seriously impact performance since all
user processes who generate audit activity will sleep if the queue
fills up.
+ Size Audit Database Carefully
+ Each audit row could require up to 416 bytes depending on what is
audited.
+ Sizing this database too small will seriously impact performance since
all user processes who generate audit activity will sleep if the
database fills up.

Back to top

-------------------------------------------------------------------------------

1.5.2: Temp Tables and OLTP

-------------------------------------------------------------------------------

(Note from Ed: It appears that with ASE 12, Sybase have solved the problem of
select/into locking the system tables for the duration of the operation. The
operation is now split into two parts, the creation of the table followed by
the insert. The system tables are only locked for the first part, and so, to
all intents and purposes, the operation acts like a create/insert pair whilst
remaining minimally logged.)

Our shop would like to inform folks of a potential problem when using temporary
tables in an OLTP environment. Using temporary tables dynamically in an OLTP
production environment may result in blocking (single-threading) as the number
of transactions using the temporary tables increases.

Does it affect my application?

This warning only applies for SQL that is being invoked frequently in an OLTP
production environment, where the use of "select into..." or "create table
#temp" is common. Applications using temp tables may experience blocking
problems as the number of transactions increases.

This warning does not apply to SQL that may be in a report or that is not used
frequently. Frequently is defined as several times per second.

Why? Why? Why?

Our shop was working with an application owner to chase down a problem they
were having during peak periods. The problem they were having was severe
blocking in tempdb.

What was witnessed by the DBA group was that as the number of transactions
increased on this particular application, the number of blocks in tempdb also
increased.

We ran some independent tests to simulate a heavily loaded server and
discovered that the data pages in contention were in tempdb's syscolumns table.

This actually makes sense because during table creation entries are added to
this table, regardless of whether it's a temporary or a permanent table.

We ran another simulation where we created the tables before the stored
procedure used it and the blocks went away. We then performed an additional
test to determine what impact creating temporary tables dynamically would have
on the server and discovered that there is a 33% performance gain by creating
the tables once rather than re-creating them.

Your mileage may vary.

How do I fix this?

To make things better, do the 90's thing -- reduce and reuse your temp tables.
During one application connection/session, aim to create the temp tables only
once.

Let's look at the lifespan of a temp table. If temp tables are created in a
batch within a connection, then all future batches and stored procs will have
access to such temp tables until they're dropped; this is the reduce and reuse
strategy we recommend. However, if temp tables are created in a stored proc,
then the database will drop the temp tables when the stored proc ends, and this
means repeated and multiple temp table creations; you want to avoid this.

Recode your stored procedures so that they assume that the temporary tables
already exist, and then alter your application so that it creates the temporary
tables at start-up -- once and not every time the stored procedure is invoked.

That's it! Pretty simple eh?

Summary

The upshot is that you can realize roughly a 33% performance gain and not
experience the blocking which is difficult to quantify due to the specificity
of each application.

Basically, you cannot lose.

Solution in pseudo-code

If you have an application that creates the same temp table many times within
one connection, here's how to convert it to reduce and reuse temp table
creations. Raymond Lew has supplied a detailed example for trying this.

Old

open connection
loop until time to go
exec procedure vavoom_often
/* vavoom_often creates and uses #gocart for every call */
/* eg: select * into #gocart from gocart */
go
.
.
.
loop-end
close connection

New

open connection
/* Create the temporary table outside of the sproc */
select * into #gocart from gocart where 1 = 2
go
loop until time to go
exec procedure vavoom_often
/* vavoom_often reuses #gocart which */
/* was created before exec of vavoom_often */
/* - First statement may be a truncate table #gocart */
/* - Execute with recompile */
/* if your table will have more than 10 data pages */
/* as the optimizer will assume 10 data pages for temp tables */
go
.
.
.
loop-end
close connection

Note that it is necessary to call out the code to create the table, which
becomes a pain in the butt because the create-table statement will have to be
replicated in any stored proc and in the initialization part of the application
- a maintenance nuisance. This can be solved by using a macro package such as
m4 or cpp, or by using and adapting the scripts from Raymond Lew below.

-------------------------------------------------------------------------------

Brian Black posted a stronger notice than this to the SYBASE-L list, and I
would agree, that any use of select/into in a production environment should be
looked at very hard. Even DSS environments, especially if they share tempdb
with an OLTP environment, should use select/into with care.

-------------------------------------------------------------------------------

From: Raymond Lew

At our company, we try to keep the database and the application loosely coupled
to allow independent changes at the frontend or the backend as long as the
interface stays the same. Embedding temp table definitions in the frontend
would make this more difficult.

To get away from having to embed the temp table definitions in the frontend
code, we are storing the temp table definitions in the database. The frontend
programs retrieve the definitions and declare the tables dynamically at the
beginning of each session. This allows for the change of backend procedures
without changes in the frontend when the API does not change.

Enclosed below are three scripts. The first is an isql script to create the
tables to hold the definitions. The second is a shell script to set up a sample
procedure named vavoom. The third is shell script to demonstrate the structure
of application code.

I would like to thank Charles Forget and Gordon Rees for their assistance on
these scripts.

--start of setup------------------------------------------------------
/* Raymond Lew - 1996-02-20 */
/* This isql script will set up the following tables:
gocart - sample table
app_temp_defn - where temp table definitions are stored
app_temp_defn_group - a logical grouping of temp table definitions
for an application function
*/

/******************************/
/* gocart table - sample table*/
/******************************/
drop table gocart
go
create table gocart
(
cartname char(10) null
,cartcolor char(30) null
)
go
create unique clustered index gocart1 on gocart (cartname)
go
insert into gocart values ('go1','blue ')
insert into gocart values ('go2','pink ')
insert into gocart values ('go3','green ')
insert into gocart values ('go4','red ')
go


/****************************************************************/
/* app_temp_defn - definition of temp tables with their indexes */
/****************************************************************/
drop table app_temp_defn
go
create table app_temp_defn
(
/* note: temp tables are unique only in first 13 chars */
objectname char(20) not null
,seq_no smallint not null
,defntext char(255) not null
)
go
create unique clustered index app_temp_defn1
on app_temp_defn (objectname,seq_no)
go
insert into app_temp_defn
values ('#gocart',1,'select * into #gocart')
insert into app_temp_defn
values ('#gocart',2,' from gocart where 1=2 ')
go
insert into app_temp_defn
values ('#gocartindex',1,
"create unique index gocartindex on #gocart (cartname) ")
go
insert into app_temp_defn
values ('#gocart1',1, 'select * into #gocart1 from gocart where 1=2')
go


/***********************************************************************/
/* app_temp_defn_group - groupings of temp definitions by applications */
/***********************************************************************/
drop table app_temp_defn_group
go
create table app_temp_defn_group
(
appname char(8) not null
,objectname char(20) not null
)
go
create unique clustered index app_temp_defn_group1
on app_temp_defn_group (appname,objectname)
go
insert into app_temp_defn_group values('abc','#gocart')
insert into app_temp_defn_group values('abc','#gocartindex')
go

/***********************************************************/
/* get_temp_defn - proc for getting the temp defn by group */
/***********************************************************/
drop procedure get_temp_defn
go
create procedure get_temp_defn
(
@appname char(8)
)
as

if @appname = ''
select defntext
from app_temp_defn
order by objectname, seq_no
else
select defntext
from app_temp_defn a
, app_temp_defn_group b
where a.objectname = b.objectname
and b.appname = @appname
order by a.objectname, a.seq_no

return
go

/* let's try some tests */
exec get_temp_defn ''
go
exec get_temp_defn 'abc'
go
--end of setup --------------------------------------------------


--- start of make.vavoom --------------------------------------------
#!/bin/sh
# Raymond Lew - 1996-02-20
#
# bourne shell script for creating stored procedures using
# app_temp_defn table
#
# demo procedure vavoom created here
#
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body


# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

#
# create the proc vavoom after running the temp defn's into db
#
isql -Ukryten -Pjollyguy -Sstarbug -e << eot |more
`cat tabletext`
go
drop procedure vavoom
go
create procedure vavoom
(
@color char(10)
)
as
truncate table #gocart1 /* who knows what lurks in temp tables */
if @color = ''
insert #gocart1 select * from gocart
else
insert #gocart1 select * from gocart where cartcolor=@color
select @color '@color', * from #gocart1
return
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
--- end of make.vavoom --------------------------------------------

--- start of defntest.sh -------------------------------------------
#!/bin/sh
# Raymond Lew 1996-02-01
#
# test script: demonstrate with a bourne shell how an application
# would use the temp table definitions stored in the database
#
# note: you must run setup and make.vavoom first
#
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body

# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

isql -Ukryten -Pjollyguy -Sstarbug -e << eot | more
`cat tabletext`
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
--- end of defntest.sh -------------------------------------------


That's all, folks. Have Fun

Back to top

-------------------------------------------------------------------------------

1.5.3: Differences between clustered and non-clustered

-------------------------------------------------------------------------------

Preface

I'd like to talk about the difference between a clustered and a non-clustered
index. The two are very different and it's very important to understand the
difference between the two in order to know when and how to use each.

I've pondered hard to find the best analogy that I could think of and I've come
up with ... the phone book. Yes, a phone book.

Imagine that each page in our phone book is equivalent to a Sybase 2K data
page. Every time we read a page from our phone book it is equivalent to one
disk I/O.

Since we are imagining, let's also imagine that our mythical ASE (that runs
against the phone book) has only enough data cache to buffer 200 phone pages.
When our data cache gets full we have to flush an old page out so we can read
in a new one.

Fasten your seat belts, because here we go...

Clustered Index

A phone book lists everyone by last name. We have an A section, we have a B
section and so forth. Within each section my phone book is clever enough to
list the starting and ending names for the given page.

The phone book is clustered by last name.

create clustered index phone_book_cl on phone_book (last_name)

It's fast to perform the following queries on the phone book:

* Find the address of those whose last name is Cisar.
* Find the address of those whose last name is between Even and Fa

Searches that don't work well:

* Find the address of those whose phone number is 440-1300.
* Find the address of those whose prefix is 440

In order to determine the answer to the two above we'd have to search the
entire phone book. We can call that a table scan.

Non-Clustered Index

To help us solve the problem above we can build a non-clustered index.

create nonclustered index phone_book_nc on phone_book (phone_number)

Our non-clustered index will be built and maintained by our Mythical ASE as
follows:

1. Create a data structure that will house a phone_number and information
where the phone_number exists in the phone book: page number and the row
within the page.

The phone numbers will be kept in ascending order.

2. Scan the entire phone book and add an entry to our data structure above for
each phone number found.
3. For each phone number found, note along side it the page number that it was
located and which row it was in.

Any time we insert, update or delete new numbers, our M-ASE will maintain this
secondary data structure. It's such a nice Server.

Now when we ask the question:

Find the address of those whose phone number is 440-1300

we don't look at the phone book directly but go to our new data structure and
it tells us which page and row within the page the above phone number can be
found. Neat eh?

Drawbacks? Well, yes. Because we probably still can't answer the question:

Find the address of those whose prefix is 440

This is because of the data structure being used to implement non-clustered
indexes. The structure is a list of ordered values (phone numbers) which point
to the actual data in the phone book. This indirectness can lead to trouble
when a range or a match query is issued.

The structure may look like this:

------------------------------------
|Phone Number | Page Number/Row |
====================================
| 440-0000 | 300/23 |
| 440-0001 | 973/45 |
| 440-0002 | 23/2 |
| ... | |
| 440-0030 | 973/45 |
| 440-0031 | 553/23 |
| ... | |
------------------------------------

As one can see, certain phone numbers may map to the same page. This makes
sense, but we need to consider one of our constraints: our Server only has room
for 200 phone pages.

What may happen is that we re-read the same phone page many times. This isn't a
problem if the phone page is in memory. We have limited memory, however, and we
may have to flush our memory to make room for other phone pages. So the
re-reading may actually be a disk I/O.

The Server needs to decide when it's best to do a table scan versus using the
non-clustered index to satisfy mini-range type of queries. The way it decides
this is by applying a heuristic based on the information maintained when an
update statistics is performed.

In summary, non-clustered indexes work really well when used for highly
selective queries and they may work for short, range type of queries.

Suggested Uses

Having suffered many table corruption situations (with 150 ASEs who wouldn't? :
-)), I'd say always have a clustered index. With a clustered index you can fish
data out around the bad spots on the table thus having minimal data loss.

When you cluster, build the cluster to satisfy the largest percentage of range
type queries. Don't put the clustered index on your primary key because
typically primary keys are increasing linearly. What happens is that you end up
inserting all new rows at the end of the table thus creating a hot spot on the
last data page.

For detail rows, create the clustered index on the commonly accessed foreign
key. This will aid joins from the master to it.

Use nonclustered indexes to aid queries where your selection is very selective.
For example, primary keys. :-)

Back to top

-------------------------------------------------------------------------------

1.5.4: Optimistic versus Pessimistic locking?

-------------------------------------------------------------------------------

This is the same problem another poster had ... basically locking a record to
ensure that it hasn't changed underneath ya.

fca...@ix.netcom.com has a pretty nifty solution if you are using ct-lib (I'll
include that below -- hope it's okay Francisco ... :-)) ...

Basically the problem you are facing is one of being a pessimist or an
optimist.

I contend that your business really needs to drive this.

Most businesses (from my experience) can be optimistic.

That is, if you are optimistic that the chances that someone is going to change
something from underneath the end-user is low, then do nothing about it.

On the other hand, if you are pessimistic that someone may change something
underneath the end-user, you can solve it at least as follows:

Solution #1

Use a timestamp on a header table that would be shared by the common data. This
timestamp field is a Sybase datatype and has nothing to do with the current
time. Do not attempt to do any operations on this column other than
comparisons. What you do is when you grab data to present to the end-user, have
the client software also grab the timestamp column value. After some
time, if the end-user wishes to update the database, compare the client
timestamp with what's in the database and if it's changed, then you can take
appropriate action: again this is dictated by the business.
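
A minimal sketch of the comparison (table and column names are
illustrative; the table needs a column of the timestamp datatype):

declare @ts varbinary(8)
select @ts = timestamp from order_hdr where order_id = 42

/* ... end-user edits the data in the client ... */

update order_hdr
set status = 'SHIPPED'
where order_id = 42
and tsequal(timestamp, @ts) /* raises error 532 if the row has changed */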

Problem #1

If users are sharing tables but columns are not shared, there's no way to
detect this using timestamps because it's not sufficiently granular.

Solution #2 (presented by fcasas)

... Also are you coding to ct-lib directly? If so there's something that you
could have done, or may still be able to do if you are using cursors.

With ct-lib there's a ct_describe function that lets you see key data. This
allows you to implement optimistic locking with cursors and not need
timestamps. Timestamps are nice, but they are changed when any column on a row
changes, while the ct_describe mechanism detects changes at the columns level
for a greater degree of granularity of the change. In other words, the
timestamp granularity is at the row, while ct_describe's CS_VERSION_KEY provides
you with granularity at the column level.

Unfortunately this is not well documented and you will have to look at the
training guide and the manuals very closely.

Further if you are using cursors do not make use of the

[for {read only | update [of column_name_list]}]

of the select statement. Omitting this clause will still get you data that can
still be updated and still only place a shared lock on the page. If you use the
read only clause you are acquiring shared locks, but the cursor is not
updatable. However, if you say

update [of ...

ASE will place update locks on the page, thus causing contention. So, if you are
using cursors, don't use the above clause. So, could you answer the following
three questions:

1. Are you using optimistic locking?
2. Are you coding to ct-lib?
3. Are you using cursors?

Problem #2

You need to be coding with ct-lib ...

Solution #3

Do nothing and be optimistic. We do a lot of that in our shop and it's really
not that big of a problem.

Problem #3

Users may clobber each other's changes ... then they'll come looking for you to
clobber you! :-)

Back to top

-------------------------------------------------------------------------------

1.5.5: How do I force an index to be used?

-------------------------------------------------------------------------------

System 11

In System 11, the binding of the internal ordinal value is alleviated so that
instead of using the ordinal index value, the index name can be used instead:

select ... from my_table (index my_first_index)

Sybase 4.x and Sybase System 10

All indexes have an ordinal value assigned to them. For example, the following
query will return the ordinal value of all the indexes on my_table:

select name, indid
from sysindexes
where id = object_id("my_table")

Assuming that we wanted to force the usage of index numbered three:

select ... from my_table(3)

Note: using a value of zero is equivalent to forcing a table scan. Whilst this
sounds like a daft thing to do, sometimes a table scan is a better solution
than heavy index scanning.

It is essential that all index hints be well documented. This is good DBA
practice. It is especially true for Sybase System 10 and below.

One scheme that I have used that works quite well is to implement a table
similar to sysdepends in the database that contains the index hints.

create table idxdepends
(
tblname varchar(32) not null -- Table being hinted
,depname varchar(50) not null -- Proc, trigger or app that
-- contains hint.
,idxname varchar(32) not null -- Index being hinted at
--,hintcount int null -- You may want to count the
-- number of hints per proc.
)

Obviously it is a manual process to keep the table populated, but it can save a
lot of trouble later on.

Back to top

-------------------------------------------------------------------------------

1.5.6: Why place tempdb and log on low numbered devices?

-------------------------------------------------------------------------------

System 10 and below.

In System 10 and Sybase 4.X, the I/O scheduler starts at logical device (ldev)
zero and works up the ldev list looking for outstanding I/O's to process.
Taking this into consideration, the following device fragments (disk init)
should be added before any others:

1. tempdb
2. log
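
For example, a minimal sketch (names, paths and sizes are illustrative):
low vdevno values put the tempdb and log devices at the front of the ldev
list:

disk init name = "tempdb_dev",
physname = "/dev/rdsk/c0t1d0s4",
vdevno = 1, size = 51200
go
disk init name = "log_dev",
physname = "/dev/rdsk/c0t1d0s5",
vdevno = 2, size = 25600
go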

Back to top

-------------------------------------------------------------------------------

Archive-name: databases/sybase-faq/part6


Advanced ASE Administration

1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database
bigger?
1.3.9 How do I run multiple versions of Sybase on the same server?
1.3.10 How do I capture a process's SQL?

General Troubleshooting | User Database Administration | ASE FAQ

-------------------------------------------------------------------------------

1.3.1 How to clear a log suspend

-------------------------------------------------------------------------------

A connection that is in a log suspend state is there because the transaction
that it was performing couldn't be logged. The reason it couldn't be logged is
because the database transaction log is full. Typically, the connection that
caused the log to fill is the one suspended. We'll get to that later.

In order to clear the problem you must dump the transaction log. This can be
done as follows:

dump tran db_name to data_device
go

At this point, any completed transactions will be flushed out to disk. If you
don't care about the recoverability of the database, you can issue the
following command:

dump tran db_name with truncate_only

If that doesn't work, you can use the with no_log option instead of the with
truncate_only.

After successfully clearing the log the suspended connection(s) will resume.

Unfortunately, as mentioned above, there is the situation where the connection
that is suspended is the culprit that filled the log. Remember that dumping the
log only clears out completed transactions. If the connection filled the log
with one large transaction, then dumping the log isn't going to clear the
suspension.

System 10

What you need to do is issue an ASE kill command on the connection and then
un-suspend it:

select lct_admin("unsuspend", db_id("db_name"))

System 11

See Sybase Technical News Volume 6, Number 2

Retaining Pre-System 10 Behaviour

By setting a database's abort tran on log full option, pre-System 10 behaviour
can be retained. That is, if a connection cannot log its transaction to the log
file, it is aborted by ASE rather than suspended.
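
For example (database name is illustrative):

use master
go
sp_dboption pubs2, "abort tran on log full", true
go
use pubs2
go
checkpoint
go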

Return to top

-------------------------------------------------------------------------------

1.3.2 What's the best value for cschedspins?

-------------------------------------------------------------------------------

It is crucial to understand that cschedspins is a tunable parameter
(recommended values being between 1-2000) and the optimum value is completely
dependent on the customer's environment. cschedspins is used by the scheduler
only when it finds that there are no runnable tasks. If there are no runnable
tasks, the scheduler has two options:

1. Let the engine go to sleep (which is done by an OS call) for a specified
interval or until an event happens. This option assumes that tasks won't
become runnable because of tasks executing on other engines. This would
happen when the tasks are waiting for I/O more than any other resource such
as locks. Which means that we could free up the CPU resource (by going to
sleep) and let the system use it to expedite completion of system tasks
including I/O.
2. Go and look for a ready task again. This option assumes that a task would
become runnable in the near term and so incurring the extra cost of an OS
context switch through the OS sleep/wakeup mechanism is unacceptable. This
scenario assumes that tasks are waiting on resources such as locks, which
could free up because of tasks executing on other engines, more than they
wait for I/O.

cschedspins controls how many times we would choose option 2 before choosing
option 1. Setting cschedspins low favours option 1 and setting it high favours
option 2. Since an I/O intensive task mix fits in with option 1, setting
cschedspins low may be more beneficial. Similarly since a CPU intensive job mix
favours option 2, setting cschedspins high may be beneficial.

The consensus is that a single CPU server should have cschedspins set to 1.
However, I strongly recommend that users carefully test values for cschedspins
and monitor the results closely. I have seen more than one site shoot itself
in the foot, so to speak, by changing this parameter in production without a
good understanding of their environment.
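
How you set cschedspins depends on your release. A sketch, assuming the
System 11 sp_configure name for it is "runnable process search count", and
using the buildmaster syntax from Q1.3.6 for older servers (the device path is
a placeholder):

-- System 11 and above
sp_configure "runnable process search count", 2000
go

# Pre-System 11, from the shell, with the server shut down
buildmaster -ycschedspins=2000 -d/dev/master_device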

Return to top

-------------------------------------------------------------------------------

1.3.3 Trace Flag Definitions

-------------------------------------------------------------------------------

To activate trace flags, add them to the RUN_* script. The following example is
using the 1611 and 260 trace flags. Note that there is no space between the
'-T' and the traceflag, despite what is written in some documentation.

Use of these traceflags is not recommended by Sybase. Please use at your
own risk.

% cd ~sybase/install
% cat RUN_BLAND
#!/bin/sh
#
# SQL Server Information:
# name: BLAND
# master device: /usr/sybase/dbf/BLAND/master.dat
# master device size: 25600
# errorlog: /usr/sybase/install/errorlog_BLAND
# interfaces: /usr/sybase
#
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260
-------------------------------------------------------------------------------


Trace Flags
+-----------------------------------------------------------------------------+
| | |
|------+----------------------------------------------------------------------|
| Flag | Description |
|------+----------------------------------------------------------------------|
| 108 | (Documented) To allow dynamic and host variables in create view |
| | statements in ASE 12.5 and above. |
|------+----------------------------------------------------------------------|
| 200 | Displays messages about the before image of the query-tree. |
|------+----------------------------------------------------------------------|
| 201 | Displays messages about the after image of the query-tree. |
|------+----------------------------------------------------------------------|
| 241 | Compress all query-trees whenever the SQL dataserver is started. |
|------+----------------------------------------------------------------------|
| | Reduce TDS (Tabular Data Stream) overhead in stored procedures. Turn |
| | off done-in-proc packets. Do not use this if your application      |
| | is a ct-lib based application; it'll break. |
| 260 | |
| | Why set this on? Glad you asked, typically with a db-lib application |
| | a packet is sent back to the client for each batch executed within a |
| | stored procedure. This can be taxing in a WAN/LAN environment. |
|------+----------------------------------------------------------------------|
| | Changes the hierarchy and casting of datatypes to pre-11.5.1 |
| | behaviour. There was an issue in some very rare cases where a wrong |
| | result could occur, but that's been cleared up in 11.9.2 and above. |
| | |
| 291 | The trace can be used at boot time or at the session level. Keep in |
| | mind that it does not disqualify a table scan from occurring. What |
| | it will do is result in fewer datatype mismatch situations and thus |
| | the optimizer will be able to estimate the costs of SARGs and joins |
| | on columns involved in a mismatch. |
|------+----------------------------------------------------------------------|
| 299 | This trace flag instructs the dataserver to not recompile a child |
| | stored procedure that inherits a temp table from a parent procedure. |
|------+----------------------------------------------------------------------|
| 302 | Print information about the optimizer's index selection. |
|------+----------------------------------------------------------------------|
| 303 | Display OR strategy |
|------+----------------------------------------------------------------------|
| | Revert special or optimizer strategy to that strategy used in |
| 304 | pre-System 11 (this traceflag resolved several bug issues in System |
| | 11, most of these bugs are fixed in ASE 11.0.3.2) |
|------+----------------------------------------------------------------------|
| 310 | Print information about the optimizer's join selection. |
|------+----------------------------------------------------------------------|
| 311 | Display the expected IO to satisfy a query. Like statistics IO |
| | without actually executing. |
|------+----------------------------------------------------------------------|
| 317 | Provide extra optimization information. |
|------+----------------------------------------------------------------------|
| 319 | Reformatting strategies. |
|------+----------------------------------------------------------------------|
| 320 | Turn off the join order heuristic. |
|------+----------------------------------------------------------------------|
| 324 | Turn off the like optimization for ad-hoc queries using |
| | @local_variables. |
|------+----------------------------------------------------------------------|
| | (Only valid in ASE versions prior to 11.9.2.) Instructs the server |
| | to use arithmetic averaging when calculating density instead of a |
| 326 | geometric weighted average when updating statistics. Useful for |
| | building better stats when an index has skew on the leading column. |
| | Use only for updating the stats of a table/index with known skewed |
| | data. |
|------+----------------------------------------------------------------------|
| 602 | Prints out diagnostic information for deadlock prevention. |
|------+----------------------------------------------------------------------|
| 603 | Prints out diagnostic information when avoiding deadlock. |
|------+----------------------------------------------------------------------|
| 699 | Turn off transaction logging for the entire SQL dataserver. |
|------+----------------------------------------------------------------------|
| 1204 | Send deadlock detection to the errorlog. |
| * | |
|------+----------------------------------------------------------------------|
| 1205 | Stack trace on deadlock. |
|------+----------------------------------------------------------------------|
| 1206 | Disable lock promotion. |
|------+----------------------------------------------------------------------|
| 1603 | Use standard disk I/O (i.e. turn off asynchronous I/O). |
| * | |
|------+----------------------------------------------------------------------|
| 1605 | Start secondary engines by hand |
|------+----------------------------------------------------------------------|
| | Create a debug engine start file. This allows you to start up a |
| | debug engine which can access the server's shared memory for running |
| | diagnostics. I'm not sure how useful this is in a production |
| 1606 | environment as the debugger often brings down the server. I'm not |
| | sure if Sybase have ported the debug stuff to 10/11. Like most of |
| | their debug tools it started off quite strongly but was never |
| | developed. |
|------+----------------------------------------------------------------------|
| | Start up only engine 0; use dbcc engine("online") to incrementally |
| 1608 | bring up additional engines, up to the maximum number of configured |
| | engines. |
|------+----------------------------------------------------------------------|
| 1610 | Boot the SQL dataserver with TCP_NODELAY enabled. |
| * | |
|------+----------------------------------------------------------------------|
| 1611 | If possible, pin shared memory -- check errorlog for success/ |
| * | failure. |
|------+----------------------------------------------------------------------|
| 1613 | Set affinity of the SQL dataserver engines onto particular CPUs --  |
| | usually pins engine 0 to processor 0, engine 1 to processor 1... |
|------+----------------------------------------------------------------------|
| 1615 | SGI only: turn on recoverability to filesystem devices. |
|------+----------------------------------------------------------------------|
| | Linux only: Revert to using cached filesystem I/O. By default, ASE |
| 1625 | on Linux (11.9.2 and above) opens filesystem devices using O_SYNC, |
| | unlike other Unix based releases, which means it is safe to use |
| | filesystem devices for production systems. |
|------+----------------------------------------------------------------------|
| 2512 | Prevent dbcc from checking syslogs. Useful when you are constantly |
| | getting spurious allocation errors. |
|------+----------------------------------------------------------------------|
| 3300 | Display each log record that is being processed during recovery. You |
| | may wish to redirect stdout because it can be a lot of information. |
|------+----------------------------------------------------------------------|
| 3500 | Disable checkpointing. |
|------+----------------------------------------------------------------------|
| 3502 | Track checkpointing of databases in errorlog. |
|------+----------------------------------------------------------------------|
| 3601 | Stack trace when error raised. |
|------+----------------------------------------------------------------------|
| 3604 | Send dbcc output to screen. |
|------+----------------------------------------------------------------------|
| 3605 | Send dbcc output to errorlog. |
|------+----------------------------------------------------------------------|
| 3607 | Do not recover any database, clear tempdb, or start up checkpoint   |
| | process. |
|------+----------------------------------------------------------------------|
| 3608 | Recover master only. Do not clear tempdb or start up checkpoint |
| | process. |
|------+----------------------------------------------------------------------|
| 3609 | Recover all databases. Do not clear tempdb or start up checkpoint |
| | process. |
|------+----------------------------------------------------------------------|
| 3610 | Pre-System 10 behaviour: divide by zero to result in NULL instead of |
| | error - also see Q6.2.5. |
|------+----------------------------------------------------------------------|
| 3620 | Do not kill infected processes. |
|------+----------------------------------------------------------------------|
| 4001 | Very verbose logging of each login attempt to the errorlog. Includes |
| | tons of information. |
|------+----------------------------------------------------------------------|
| 4012 | Don't spawn chkptproc. |
|------+----------------------------------------------------------------------|
| 4013 | Place a record in the errorlog for each login to the dataserver. |
|------+----------------------------------------------------------------------|
| 4020 | Boot without recover. |
|------+----------------------------------------------------------------------|
| | Forces all I/O requests to go through engine 0. This removes the |
| 5101 | contention between processors but could create a bottleneck if |
| | engine 0 becomes busy with non-I/O tasks. For more information... |
| | 5101/5102. |
|------+----------------------------------------------------------------------|
| 5102 | Prevents engine 0 from running any non-affinitied tasks. For more |
| | information...5101/5102. |
|------+----------------------------------------------------------------------|
| 7103 | Disable table lock promotion for text columns. |
|------+----------------------------------------------------------------------|
| 8203 | Display statement and transaction locks on a deadlock error. |
|------+----------------------------------------------------------------------|
| * | Starting with System 11 these are sp_configure'able |
+-----------------------------------------------------------------------------+
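
Many of these, particularly the optimizer flags (302, 310 and friends), can
also be switched on for just your session with dbcc traceon. A quick sketch:

dbcc traceon(3604)   -- send trace output to your terminal
go
dbcc traceon(302, 310)
go
/* run the query you want to analyse, then... */
dbcc traceoff(302, 310)
go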

Return to top

-------------------------------------------------------------------------------

1.3.4 Trace Flags -- 5101 and 5102

-------------------------------------------------------------------------------

5101

Normally, each engine issues and checks for its own Disk I/O on behalf of the
tasks it runs. In completely symmetric operating systems, this behavior
provides maximum I/O throughput for ASE. Some operating systems are not
completely symmetric in their Disk I/O routines. For these environments, the
server can be booted with the 5101 trace flag. While tasks still request disk I
/O from any engine, the actual request to/from the OS is performed by engine 0.
The performance benefit comes from the reduced or eliminated contention on the
locking mechanism inside the OS kernel. To enable I/O affinity to engine 0,
start ASE with the 5101 Trace Flag.

Your errorlog will indicate the use of this option with the message:

Disk I/O affinitied to engine: 0

This trace flag only provides performance gains for servers with 3 or more
dataserver engines configured and being significantly utilized.

Use of this trace flag with fully symmetric operating systems will degrade
performance!

5102

The 5102 trace flag prevents engine 0 from running any non-affinitied tasks.
Normally, this forces engine 0 to perform Network I/O only. Applications with
heavy result set requirements (either large results or many connections issuing
short, fast requests) may benefit. This effectively eliminates the normal
latency for engine 0 to complete running its user thread before it issues the
network I/O to the underlying network transport driver. If used in conjunction
with the 5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
For environments with heavy disk and network I/O, engine 0 could easily
saturate when only the 5101 flag is in use. This flag allows engine 0 to
concentrate on I/O by not allowing it to run user tasks. To force task affinity
off engine 0, start ASE with the 5102 Trace Flag.

Your errorlog will indicate the use of this option with the message:

I/O only enabled for engine: 0
-------------------------------------------------------------------------------

Warning: Not supported by Sybase. Provided here for your enjoyment.

Return to top

-------------------------------------------------------------------------------

1.3.5 What is cmaxpktsz good for?

-------------------------------------------------------------------------------

cmaxpktsz corresponds to the parameter "maximum network packet size" which you
can see through sp_configure. I recommend only updating this value through
sp_configure. If some of your applications send or receive large amounts of
data across the network, these applications can achieve significant performance
improvement by using larger packet sizes. Two examples are large bulk copy
operations and applications reading or writing large text or image values.
Generally, you want to keep the value of default network packet size small for
users performing short queries, and allow users who send or receive large
volumes of data to request larger packet sizes by setting the maximum network
packet size configuration variable.

caddnetmem corresponds to the parameter "additional netmem" which you can see
through sp_configure. Again, I recommend only updating this value through
sp_configure. "additional netmem" sets the maximum size of additional memory
that can be used for network packets that are larger than ASE's default packet
size. The default value for additional netmem is 0, which means that no extra
space has been allocated for large packets. See the discussion below, under
maximum network packet size, for information on setting this configuration
variable. Memory allocated with additional netmem is added to the memory
allocated by the "memory" configuration parameter. It does not affect other
ASE memory uses.

ASE guarantees that every user connection will be able to log in at the default
packet size. If you increase maximum network packet size and additional netmem
remains set to 0, clients cannot use packet sizes that are larger than the
default size: all allocated network memory will be reserved for users at the
default size. In this situation, users who request a large packet size when
they log in receive a warning message telling them that their application will
use the default size. To determine the value for additional netmem if your
applications use larger packet sizes:

* Estimate the number of simultaneous users who will request the large packet
sizes and the sizes their applications will request, and sum the requested
sizes.
* Multiply this sum by three, since each connection needs three buffers.
* Add 2% for overhead, rounded up to the next multiple of 512 (see the worked
example below).
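
A worked example with hypothetical numbers: say 10 connections will each
request 8192-byte packets.

10 connections * 8192 bytes      =  81920
81920 * 3 buffers per connection = 245760
245760 * 1.02 (2% overhead)      = 250675.2
rounded up to a multiple of 512  = 250880

sp_configure "maximum network packet size", 8192
go
sp_configure "additional netmem", 250880
go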

Return to top

-------------------------------------------------------------------------------

1.3.6 Buildmaster Configuration Definitions

-------------------------------------------------------------------------------


Attention! Be very careful with these parameters. Use them only at your
own risk. Be sure to have a copy of the original parameters, and be sure
to have a dump of all dbs (including master) handy.

Since the release of 11.x, there is almost no need for buildmaster to
configure parameters. In fact, buildmaster has been removed from ASE 12.5.
This section is really kept for anyone out there running old versions of
ASE. I still see the odd post from people asking about 4.9.2, so this is
for you.

Anyone else who feels a need to use buildmaster should check sp_configure
and/or SERVERNAME.cfg to see if the configuration parameter is there before
using buildmaster.

YOU HAVE BEEN WARNED.

-------------------------------------------------------------------------------

The following is a list of configuration parameters and their effect on the
ASE. Changes to these parameters can affect performance of the server. Sybase
does not recommend modifying these parameters without first discussing the
change with Sybase Tech Support. This list is provided for information only.

These are categorized into two kinds:

* Configurable through sp_configure and
* not configurable but can be changed through 'buildmaster -y<variable>=value
-d<dbdevice>'

Configurable variables:

crecinterval:

The recovery interval specified in minutes.

ccatalogupdates:

A flag to inform whether system catalogs can be updated or not.

cusrconnections:

This is the number of user connections allowed in SQL
Server. This value + 3 (one each for the checkpoint, network
and mirror handlers) makes up the number of pss configured
in the server.
-------------------------------------------------------------------------------

cfgpss:

Number of PSS configured in the server. This value will
always be 3 more than cusrconnections. The reason is we
need PSS for checkpoint, network and mirror handlers.

THIS IS NOT CONFIGURABLE.
-------------------------------------------------------------------------------

cmemsize:

The total memory configured for the Server in 2k
units. This is the memory the server will use for both
Server and Kernel Structures. For Stratus or any 4k
pagesize implementation of ASE, certain values
will change as appropriate.

cdbnum:

This is the number of databases that can be open in SQL
Server at any given time.

clocknum:

Variable that defines and controls the number of logical
locks configured in the system.

cdesnum:

This is the number of objects that can be open at
a given point in time.

cpcacheprcnt:

This is the percentage of the cache that is reserved
for caching procedures.

cfillfactor:

Fill factor for indexes.

ctimeslice:

This value is in units of milliseconds. It determines
how much time a task is allowed to run before it yields.
This value is internally converted to ticks. See the
explanations for cclkrate, ctimemax etc. below.

ccrdatabasesize:

The default size of a database when it is created.
This value is in megabytes and the default is 2MB.

ctappreten:

An outdated, unused variable.

crecoveryflags:

A toggle flag which will display certain recovery information
during database recoveries.

cserialno:

An informational variable that stores the serial number
of the product.

cnestedtriggers:

Flag that controls whether nested triggers are allowed or not.

cnvdisks:

Variable that controls the number of device structures
that are allocated, which affects the number of devices
that can be opened during server boot up. If the user
defined 20 devices and this value is configured to be
10, during recovery only 10 devices will be opened and
the rest will get errors.

cfgsitebuf:

This variable controls the maximum number of site handler
structures that will be allocated. This in turn
controls the number of site handlers that can be
active at a given instance.

cfgrembufs:

This variable controls the number of remote buffers
needed to send and receive from remote sites.
This value should really be set to the number of
logical connections configured. (See below.)

cfglogconn:

This is the number of logical connections that can
be open at any instance. This value controls the
number of resource structures allocated and hence
affects the overall number of logical connections
combined across the different sites. THIS IS NOT
PER SITE.

cfgdatabuf:

Maximum number of pre-read packets per logical connection.
If logical connections is set to 10, and cfgdatabuf is set
to 3, then the number of resources allocated will be
30.

cfupgradeversion:

Version number of the last upgrade program run on this server.

csortord:

Sort order of ASE.

cold_sortord:

When sort orders are changed the old sort order is
saved in this variable to be used during recovery
of the database after the Server is rebooted with
the sort order change.

ccharset:

Character Set used by ASE

cold_charset:

Same as cold_sortord except it stores the previous
Character Set.
-------------------------------------------------------------------------------

cdflt_sortord:

page # of sort order image definition. This should
not be changed at any point. This is a server only
variable.

cdflt_charset:

page # of character set image definition. This should
not be changed at any point. This is a server only
variable.

cold_dflt_sortord:

page # of previous sort order image definition. This
should not be changed at any point. This is a server
only variable.

cold_dflt_charset:

page # of previous character set image definition. This
should not be changed at any point. This is a server
only variable.
-------------------------------------------------------------------------------

cdeflang:

Default language used by ASE.

cmaxonline:

Maximum number of engines that can be brought online. This
number should not be more than the number of CPUs available on
the system. On a single-CPU system such as the RS6000 this value
is always 1.

cminonline:

Minimum number of engines that should be online. This is 1 by
default.

cengadjinterval:

A noop variable at this time.

cfgstacksz:

Stack size per task configured. This doesn't include the guard
area of the stack space. The guard area can be altered through
cguardsz.
-------------------------------------------------------------------------------

cguardsz:

This is the size of the guard area. ASE will
allocate stack space for each task by adding cfgstacksz
(configurable through sp_configure) and cguardsz (default is
2K). This has to be a multiple of PAGESIZE which will be 2k
or 4k depending on the implementation.

cstacksz:

Size of fixed stack space allocated per task including the
guard area.
-------------------------------------------------------------------------------

Non-configurable values :

-------------------------------------------------------------------------------

TIMESLICE, CTIMEMAX ETC:

-------------------------------------------------------------------------------

1 millisecond = 1/1000th of a second.
1 microsecond = 1/1000000th of a second.
"Tick": the interval between two clock interrupts in real time.

"cclkrate" :

A value specified in microsecond units.
Normally on systems where a fine grained timer is not available
or if the Operating System cannot set sub-second alarms, this
value is set to 1000000 microseconds, which is 1 second. In
other words an alarm will go off every second, or you will
get 1 tick per second.

On Sun4 this is set to 100000 microseconds, which results in
an interrupt going off every 1/10th of a second. You will get
10 ticks per second.

"avetimeslice" :

A value specified in millisecond units.
This is the value given in "sp_configure",<timeslice value>.
The milliseconds are converted to microseconds and
finally to tick values:

ticks = <avetimeslice> * 1000 / cclkrate.

"timeslice" :

-------------------------------------------------------------------------------
The unit of this variable is in ticks.
This value is derived from "avetimeslice". If "avetimeslice"
is less than 1000 milliseconds then timeslice is set to 1 tick.

"ctimemax" :

The unit of this variable is in ticks.

A task is considered to be in an infinite loop if the ticks
consumed by the task exceed the ctimemax value. This
is when you get timeslice -201 or -1501 errors.

"cschedspins" :

For more information see Q1.3.2.

This value alters the behavior of ASE scheduler.
The scheduler will either run a qualified task or look
for I/O completion or sleep for a while before it can
do anything useful.

The cschedspins value determines how often the scheduler
will sleep and not how long it will sleep. A low value
is suited to an I/O bound ASE, while a high value is
suited to a CPU bound ASE. Since ASE is usually used
in a mixed mode, this value needs to be fine tuned.

Based on practical behavior in the field, a single engine
ASE should have cschedspins set to 1 and a multi-engine
server should have it set to 2000.

Now that we've defined the units of these variables, what happens when we
change cclkrate?

Assume we have a cclkrate=100000.

A clock interrupt will occur every (100000/1000000) = 1/10th of a second.
A task that started with 1 tick and can go up to "ctimemax=1500" ticks can
potentially take 1/10s * (1500 + 1) ticks, which is about 150 seconds per
task.

Now change the cclkrate to 75000.

A clock interrupt will occur every (75000/1000000) = 3/40ths of a second
(0.075s). A task that started with 1 tick and can go up to ctimemax=1500
ticks can potentially take 0.075s * (1500 + 1) ticks, which is about 112
seconds per task.

Decreasing the cclkrate value will decrease the time spent on each task. If the
task could not voluntarily yield within the time, the scheduler will kill the
task.

UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The default
ctimemax value should be set to 1500. This is an empirical value and it can
be changed under special circumstances, strictly under the guidance of DSE.

-------------------------------------------------------------------------------

cfgdbname:

Name of the master device is saved here. This is 64
bytes in length.

cfgpss:

This is a derived value from cusrconnections + 3.
See cusrconnections above.

cfgxdes:

This value defines the number of transactions that
can be done by a task at a given instance.
Changing this value to be more than 32 will have no
effect on the server.
cfgsdes:

This value defines the number of open tables per
task. This is typically the number of tables specified
in a query, including subqueries.

Sybase advises not to change this value. There
will be a significant change in the size of the per
user resources in ASE.

cfgbuf:

This is a derived variable based on the total
memory configured and subtracting different resource
sizes for Databases, Objects, Locks and other
Kernel memories.

cfgdes:

This is same as cdesnum. Other values will have no effect on it.

cfgprocedure:

This is a derived value, based on the cpcacheprcnt variable.

cfglocks:

This is same as clocknum. Other values will have no effect on it.

cfgcprot:

This is a variable that defines the number of cache protectors per
task. This is used internally by ASE.

Sybase advises not to modify this value, as the default of 15 will
be more than sufficient.

cnproc:

This is a derived value based on cusrconnections + <extra> for
Sybase internal tasks that are both visible and non-visible.

cnmemmap:

This is an internal variable that will keep track of ASE
memory.

Modifying this value will not have any effect.

cnmbox:

Number of mail box structures that need to be allocated.
More used in VMS environment than UNIX environment.

cnmsg:

Used in tandem with cnmbox.

cnmsgmax:

Maximum number of messages that can be passed between mailboxes.

cnblkio:

Number of disk I/O requests (async and direct) that can be
processed at a given instance. This is a global value for all
the engines, not a per engine value.

This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.

cnblkmax:

Maximum number of I/O requests that can be processed at any given
time.

Normally cnblkio, cnblkmax and cnmaxaio_server should be the same.

cnmaxaio_engine:

Maximum number of I/O requests that can be processed by one engine.
Since engines are Operating System processes, if there is any limit
imposed by the Operating System on a per process basis then
this value should be set. Otherwise it is a noop.

cnmaxaio_server:

This is the total number of I/O requests ASE can do.
This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.

csiocnt:

not used.

cnbytio:

Similar to the disk I/O requests, this is for network I/O requests.
This includes disk/tape dumps also. This value is for
the whole ASE including the other engines.

cnbytmax:

Maximum number of network I/O requests, including disk/tape dumps.

cnalarm:

Maximum number of alarms including the alarms used by
the system. This is typically used when users do "waitfor delay"
commands.

cfgmastmirror:

Mirror device name for the master device.

cfgmastmirror_stat:

Status of mirror devices for the master device like serial/dynamic
mirroring etc.

cindextrips:

This value determines the aging of an index buffer before it
is removed from the cache.

coamtrips:

This value determines the aging of an OAM buffer before it
is removed from the cache.

cpreallocext:

This value determines the number of extents that will be
allocated while doing BCP.

cbufwashsize:

This value determines when to flush buffers in the cache
that are modified.

Return to top

-------------------------------------------------------------------------------

1.3.7: What is CIS and how can I use it?

-------------------------------------------------------------------------------

CIS is the new name for Omni ASE. The biggest difference is that CIS is
included with Adaptive Server Enterprise as standard. Actually, this is not
completely accurate; the ability to connect to other ASEs and SQL Servers,
including Microsoft's, is included as standard. If you need to connect to DB2
or Oracle you have to obtain an additional licence.

So, what is it?

CIS is a means of connecting two servers together so that seamless cross-server
joins can be executed. It is not just restricted to selects, pretty much any
operation that can be performed on a local table can also be performed on a
remote table. This includes dropping it, so be careful!

What servers can I connect to?

* Sybase ASE
* Microsoft SQL Server
* IBM DB2
* Oracle

What are the catches?

Well, nothing truly comes for free. CIS is not a means of providing true load
sharing, although you will find nothing explicitly in the documentation to tell
you this. Obviously there is a performance hit which seems to affect cursors
worst of all. CIS itself is implemented using cursors and this may be part of
the explanation.

OK, so how do I use it?

Easy! Add the remote server using sp_addserver. Make sure that you define it
as type sql_server or ASEnterprise. Create an "existing" table using the
definition of the remote table. Update statistics on this new "existing"
table. Then simply use it in joins exactly as if it were a local table, as in
the sketch below.
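
A minimal sketch (the server, database and table names are hypothetical, and
the column list must match the remote table's definition exactly):

sp_addserver REMOTE_SRV, ASEnterprise, REMOTE_SRV
go
create existing table remote_titles (
    title_id varchar(6)  not null,
    title    varchar(80) not null
) at "REMOTE_SRV.pubs2.dbo.titles"
go
update statistics remote_titles
go
select l.title_id, r.title
from local_sales l, remote_titles r
where l.title_id = r.title_id
go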

Return to top

-------------------------------------------------------------------------------

1.3.8: If the master device is full, how do I make the master database bigger?

-------------------------------------------------------------------------------

It is not possible to extend the master database across another device, so the
following from Eric McGrane (recently of Sybase Product Support Engineering)
should help.

* dump the current master database
* Pre-12.5 users use buildmaster to create a new master device with a larger
size; ASE 12.5 users use dataserver to build the new, larger, master device
(see the sketch after this list)
* start the server in single user mode using the new master device
* login to the server and execute the following tsql:

select * from sysdevices

* take note of the high value
* load the dump of the master database you had just taken
* restart the server (as it will be shut down when master is done loading),
again in single user mode so that you can update system tables
* login to the server and update sysdevices, setting high for master to the
value that you noted previously
* shut the server down and start it back up, but this time not in single user
mode.
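
A sketch of the device-build step. The paths, sizes and flags are
illustrative only (buildmaster takes its size in 2k virtual pages, and the
12.5 dataserver flags are quoted from memory), so check the utility guide
for your release:

# Pre-12.5: create a 100MB master device (51200 x 2k pages)
buildmaster -d/usr/sybase/dbf/master.dat -s51200

# ASE 12.5: dataserver builds the device itself
dataserver -d/usr/sybase/dbf/master.dat -z2k -b100M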

The end result of the above is that you will now have a larger master device
and you can alter your master database to be a larger size. For details about
starting the server in single user mode and how to use buildmaster (if you need
the details) please refer to the documentation.

Return to top

-------------------------------------------------------------------------------

1.3.9: How do I run multiple versions of Sybase on the same server?

-------------------------------------------------------------------------------

The answer to this relies somewhat on the platform that you are using.

Unix

ASE Versions Before 12.0

This applies to Unix and variants, Linux included. Install the various releases
of software into logical places within your filesystem. I like to store all
application software below a single directory for ease of maintenance, choose
something like /sw. I know that some are keen on /opt and others /usr/local. It
is all down to preference and server usage. If you have both Oracle and Sybase
on the same server you might want /sw/sybase or /opt/sybase. Be a little
careful here if your platform is Linux or FreeBSD. The standard installation
directory for Sybase on those platforms is /opt/sybase. Finally, have a
directory for the release, say ASE11_9_2 or simply 11.9.2 if you only ever have
Sybase ASE running on this server. A little imagination is called for!

So, now you have a directory such as /sw/sybase/ASE/11.9.2 (my preferred choice
:-), and some software installed under the directories, what now? In the most
minimal form, that is all you need. None of the environment variables are
essential. You could quite successfully run

/sw/sybase/ASE/11.9.2/bin/isql -Usa -SMYSERV -I/sw/sybase/ASE/11.9.2/interfaces

and get to the server, but that is a lot of typing. By setting the SYBASE
environment variable to /sw/sybase/ASE/11.9.2 you never need to tell isql or
other apps where to find the interfaces. Then, you can set the path with a cool

PATH=$SYBASE/bin:$PATH

to pick up the correct set of Sybase binaries. That reduces the previous mass
of typing to

isql -Usa -SMYSERV

which is much more manageable.

You can create yourself a couple of shell scripts to do the changes for you. So
if the script a11.9 contained:

SYBASE=/sw/sybase/ASE/11.9.2
PATH=$SYBASE/bin:$PATH

# Remember to export the variables!
export PATH SYBASE

and a11.0 contained:

SYBASE=/sw/sybase/ASE/11.0.3.3
PATH=$SYBASE/bin:$PATH

# Remember to export the variables!
export PATH SYBASE

you would toggle between being connected to an 11.9.2 server and an 11.0.3.3
server, depending upon which one you executed last. The scripts are not at all
sophisticated; you could quite easily have one script and pass a version string
into it. You will notice that the PATH variable gets longer each time the
script is executed. You could add greps to see if there was already a Sybase
instance on the path. Have I mentioned imagination?

ASE 12.0 and Beyond

Sybase dramatically changed the structure of the installation directory tree
with ASE 12. You still have a SYBASE environment variable pointing to the
root, but now the various packages fit below that directory. So, if we take /
sw/sybase as the root directory, we have the following (the following is for a
12.5 installation, but all versions follow the same format):

/sw/sybase/ASE-12_5
/OCS-12_5

Below ASE-12_5 is most of the stuff that we have come to expect under $SYBASE,
the install, bin and scripts directories. This is also where the SERVER.cfg
file has moved to. (Note that the interfaces file is still in $SYBASE.) The bin
directory on this side includes the dataserver, diagserver and srvbuild
binaries.

The OCS-12_5 is the open client software directory. It means that Sybase can
update the client software without unduly affecting the server. isql, bcp and
other clients are to be found here.

It does take a little getting used to if you have been using the pre-12 style
for a number of years. However, in its defence, it is much more logical, even
if it about triples the length of your PATH variable!

That is another good part of the new installation. Sybase actually provides you
with the shell script to do all of this. There is a file in /sw/sybase called
SYBASE.sh (there is an equivalent C shell version in the same place) that sets
everything you need!
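
So, with the layout above, setting up a 12.x environment is just a matter of
sourcing that file:

. /sw/sybase/SYBASE.sh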

Interfaces File

The only real addition to all of the above is an easier way to manage the
interfaces file. As mentioned before, ASE based apps look for the interfaces
file in $SYBASE/interfaces by default. Unix is nice in that it allows you to
have symbolic links that make it appear as if a file is somewhere that it
isn't. Place the real interfaces file somewhere independent of the software
trees. /sw/sybase/ASE/interfaces might be a sound logical choice. Now, cd to
$SYBASE and issue

ln -s /sw/sybase/ASE/interfaces

and the interfaces will appear to exist in the $SYBASE directory, but will in
fact remain in its own home.

Note: make sure that the interfaces file is copied to its own home before
removing it from $SYBASE.

Now you can put symbolic links in each and every software installation and only
have to worry about maintaining the server list, on that server, in one place.
Having the interfaces file common to many physical servers is trickier, but not
impossible. Personally I would choose to put it in a central CVS repository and
use that to keep each server reasonably up-to-date.

NT/2000

Firstly, I have tried the following on W2K and it all works OK. I have read a
number of reports of people having difficulty getting clean installs under NT,
11.5 and 12.0 mainly. I cannot remember having a problem with either of those
myself, but I only ever installed it to test that stuff I write runs on all
platforms. I have no intention of upgrading to XP until MS pays me to do it. It
looks like a cheap plastic version of an operating system and I pity anyone
that is forced to use it.

NT is tougher than UNIX to run multiple instances on, mainly due to the fact
that it wants to do stuff for you in the background, namely configure
environment variables. The following worked for me with the following versions
of Sybase ASE all installed and running on a single server: 11.5.1, 11.9.2,
12.5. I don't have a version of ASE 12.0 for NT. If I can persuade Sybase to
send it to me, I might be able to get that running too. Notably, each and
every one of the databases runs as a service!!!

1. Start by installing each software release into its own area. Make sure that
it is a local disk. (See Q2.2.3.) I chose to install ASE 12.5 into C:\
Sybase12_5 and ASE 11.9.2 into C:\Sybase11_9_2 etc. When it asks you about
configuring the server, select "no" or "cancel".
2. Add a user for each installation that you are going to run. Again, I added
a user sybase12_5 for ASE 12.5 and sybase11_9_2 for ASE 11.9.2.
3. As a system account, edit the environment variables (On W2K this is
Settings->Control Panel->System->Advanced->Environment Variables...) and
remove any reference to Sybase from the system path. Make sure that you
store away what has been set. A text file on your C drive is a good idea at
this stage.
4. Similarly, remove references to Sybase from the Lib, Include and CLASSPATH
variables, storing the strings away.
5. Remove the SYBASE, DSEDIT and DSQUERY variable.
6. As I said before, I do not own 12.0, so I cannot tell you what to do about
the new Sybase variables SYBASE_OCS, SYBASE_ASE, SYBASE_FTS, SYBASE_JRE
etc. I can only assume that you need to cut them out too. If you are
installing pre-12 with only 1 of 12 or 12.5, then it is not necessary.
7. Login as each new Sybase user in turn and add to each of these a set of
local variables corresponding to path, Include, Lib and set them to be the
appropriate parts from the strings you removed from the system versions
above. So, if you installed ASE 12.5 in the method described, you will have
a whole series of variables with settings containing "C:\Sybase_12_5", add
all of these to local variables belonging to the user sybase12_5. Repeat
for each instance of ASE installed. This is a tedious process and I don't
know a way of speeding it up. It may be possible to edit the registry, but
I was not happy doing that.
8. If you have made each of the Sybase users administrators, then you can
configure the software from that account, and install a new ASE server.
Remember that each one needs its own port. 11.5.1 and 11.9.2 did not give
me an option to change the port during the install, so I had to do that
afterwards by editing the SQL.INI for each server in its own installation
tree.
9. If you are not able to make each user an administrator, you will need to
work with an admin to configure the software. (ASE requires administrative
rights in order to be able to add the service entries.) You will need to
log in as this admin account, set the path to the appropriate value for
each installation, install the software and then set the path to the new
values, install the next ASE etc. On NT for sure you will have to log out
and log in after changing the path variable. 2000 may be less brain dead.
Just be thankful you are not having to reboot!
10. Log back in as your tame administrator account and go into the control
panel. You need to start the "Services" applet. This is either there if you
are running NT or you have to go into "Administrative Tools" for 2000.
Scroll down and select the first of the services, which should be of the
form

"Sybase SQLServer _MYSERVER".

Right click and select "Properties" (I think this is how it was for NT, but
you want that service's properties, however you get there). In 2000 there is
a "Log On" tab. NT has a button (I think) that serves the same purpose.
Whether tab or button, click on it. You should have a panel that starts, at
the top, with "Log on as" and a pair of radio options. The top one will
probably be selected, "Local System account". Choose the other and enter
the details for the sybase account associated with this server. So if the
server is ASE 12.5 enter "sybase12_5" for "This account" and enter the
password associated with this account in the next two boxes. Select enough
"OK"s to take you out of the service properties editor.
11. None of the installations made a good job of the services part. All of them
added services for all of the standard servers (data, backup, monitor and
XP), even though I had not configured any but XP server. (The NT
installation is of a different form to the UNIX/Linux versions.) The 12.5
XP configuration was OK, but the pre-12 ones were not. You will have to go
in and manually set the user to connect as (as described earlier). If you
do not do this, the services will not start properly.
12. You should then be able to start any or all of the services by pressing the
"play" button.
13. Finally, you need to re-edit the local copies of the path, Include and Lib
variables for your tame admin account if you use that account to connect to
Sybase.

It worked for me, as I said. I was able to run all 3 services simultaneously
and connect from the local and external machines. There is no trick as neat as
the symbolic link on Unix. Links under NT work differently.

Return to top

-------------------------------------------------------------------------------

1.3.10: How do I capture a process's SQL?

-------------------------------------------------------------------------------

This is a bit of a wide question, and there are many answers to it. Primarily,
it depends on why you are trying to capture it. If you are trying to debug a
troublesome stored procedure that is behaving differently in production to how
it did in testing, then you might look at the DBCC method. Alternatively, if
you wanted to do some longer term profiling, then auditing or one of the third
party tools might be the way forward. If you know of methods that are not
included here, please let me know.

DBCCs

If you want to look at the SQL a particular process is running at the moment,
one of the following should work. Not sure which versions of ASE these work
with. Remember to issue dbcc traceon(3604) before running any of the dbcc's so
that you can see the output at your terminal.

* dbcc sqltext(spid)
* dbcc pss(0, spid, 0)

The first of the commands prints just the SQL of the spid, a bit like this:

[27] BISCAY.master.1> dbcc sqltext(9)
[27] BISCAY.master.2> go
SQL Text: select spid, status, suser_name(suid), hostname,
db_name(dbid), cmd, cpu, physical_io, memusage,
convert(char(5),blocked) from master..sysprocesses
DBCC execution completed. If DBCC printed error messages, contact a user with
System Administrator (SA) role.
[28] BISCAY.master.1>

The second issues an awful lot of other stuff before printing the text at the
bottom. Mercifully, this means that you don't have to scroll up to search for
the SQL text, which is in much the same format as with dbcc sqltext.

There are a number of third party tools that will execute these commands from a
list of processes. One of the problems is that you do have to be 'sa' or have
'sa_role' in order to run them.

Certainly the first, and possibly both, have one major drawback, and that is
that they are limited to displaying about 400 bytes worth of text, which can be
a bit annoying. However, if what you are trying to do is catch a piece of rogue
SQL that is causing a table scan or some other dastardly trick, a unique
comment in the early part of the query will lead to its easy identification.

Monitor Server

Since ASE 11.5, Monitor Server has had the capability to capture a process's
SQL. See Q1.6.2 for how to configure a Monitor Server client. When you are
done, you can see the SQL text from a process using the "Process Current
SQL Statement" monitor.

Auditing

The second case is wanting to capture the SQL of a number of processes over a
period of time. There are several methods of doing this. Probably the most
popular is to use auditing, almost certainly because it requires no additional
software purchases.

Auditing is a very powerful tool that can collect information on just about
everything that happens on the server. It can be configured to capture
'cmdtext' for any or all users on a system. The data is written to the
sysaudits tables in the sybsecurity database for later perusal. The SQL
captured is not limited to a number of bytes, like the previous examples, but
if it is more than 255 bytes long, then it will span several audit records,
which must be put back together to see the whole picture. To be honest, I am
not sure what happens now that varchars can be greater than 255 bytes in
length. Personal experience with auditing leads me to think that the load on
the server is up to about 3%, depending on the number of engines you have (the
more engines, the more of a load auditing is) and, obviously, the number of
processes you wish to monitor. I calculated 3% based on auditing all of 400
users, each of which had 2 connections to the server, on a server with 7
engines.
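
A sketch of turning this on for one login. The login name is hypothetical and
the exact sp_audit arguments vary between ASE releases, so treat this as an
outline and check the manual for your version:

sp_configure "auditing", 1
go
sp_audit "cmdtext", "some_login", "all", "on"
go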

Ribo

Another option for capturing the SQL text is to use the free Ribo utility that
is provided as part of ASE these days. This is a small server written in
Java as an example of what can be done using jConnect. This utility is nice in
that it does not place any load on the ASE server. However, it probably has an
effect on the client that is using it. This utility's other drawback is that
each client that you wish to monitor via Ribo must be directly configured to
use it. It is not possible to just magically turn it on mid-session.

The way it works is to act as an intermediary between the ASE server and the
client wishing to connect. All SQL is passed through and executed exactly as
if the client were directly connected, and the results passed back. What the
Ribo server does is enable you to save the inbound SQL to a file.

3rd Party Tools

Again, there are a number of third party tools that do this job as well,
OpenSwitch being one of them. There are also a number of third party tools that
do a better job than this. They do not have any impact on the client or the
server. They work by sniffing the network for relevant packets and then putting
them back together. In actuality, they do a lot more than just generate the
SQL, but they are capable of that.

Return to top

-------------------------------------------------------------------------------

General Troubleshooting User Database Administration ASE FAQ


SQL Fundamentals

6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a deferred
update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 How do I remove duplicate rows from a table?

SQL Advanced bcp ASE FAQ

-------------------------------------------------------------------------------

6.1.1: Alternative to row at a time processing

-------------------------------------------------------------------------------

Someone asked how they could speed up their processing. They were batch
updating/inserting gobs of information. Their algorithm was something as
follows:

... In another case I do:

If exists (select record) then
update record
else
insert record

I'm not sure which way is faster or if it makes a difference. I am doing
this for as many as 4000 records at a time (calling a stored procedure 4000
times!). I am interested in knowing any way to improve this. The parameter
translation alone on the procedure calls takes 40 seconds for 4000 records.
I am using exec in DB-Lib.

Would RPC or CT-Lib be better/faster?

A netter responded stating that it was faster to ditch their algorithm and to
apply a set based strategy:

The way to take your approach is to convert the row at a time processing
(which is more traditional type of thinking) into a batch at a time (which
is more relational type of thinking). Now I'm not trying to insult you to
say that you suck or anything like that, we just need to dial you in to
think in relational terms.

The idea is to do batches (or bundles) of rows rather than processing a
single one at a time.

So let's take your example (since you didn't give exact values [probably
out of kindness to save my eyeballs] I'll use your generic example to
extend what I'm talking about):

Before:

if exists (select record) then
update record
else
insert record

New way:
1. Load all your rows into a table named new_stuff in a separate work
database (call it work_db) and load it using bcp -- no third GL needed.
1. truncate new_stuff and drop all indexes
2. sort your data using UNIX sort and sort it by the clustered columns
3. load it using bcp
4. create clustered index using with sorted_data and any ancillary
non-clustered index.
2. Assuming that your target table is called old_stuff
3. Do the update in a single batch:
begin tran

/* delete any rows in old_stuff which normally
** would have been updated... we'll insert 'em instead!
** Essentially, treat the update as a delete/insert.
*/

delete old_stuff
from old_stuff,
new_stuff
where old_stuff.key = new_stuff.key

/* insert entire new table: this adds any rows
** that would have been updated before and
** inserts the new rows
*/
insert old_stuff
select * from new_stuff

commit tran


You can do all this without writing 3-GL, using bcp and a shell script.

A word of caution:

Since these inserts/updates are batch oriented you may blow your
log if you attempt to do too many at a time. In order to avoid this, use
the set rowcount directive to create bite-size chunks, as sketched below.
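
A sketch of the chunking idea for the delete step, assuming 1000-row chunks
and the old_stuff/new_stuff tables from above (the key column stands in for
your real join columns):

set rowcount 1000
declare @done int
select @done = 0
while @done = 0
begin
    begin tran
    delete old_stuff
    from old_stuff, new_stuff
    where old_stuff.key = new_stuff.key
    if @@rowcount < 1000
        select @done = 1
    commit tran
    /* dump tran here if the log is still filling up */
end
set rowcount 0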

Back to top

-------------------------------------------------------------------------------

6.1.2: When should I execute an sp_recompile?

-------------------------------------------------------------------------------

An sp_recompile should be issued any time a new index is added or update
statistics is run. Dropping an index will cause an automatic recompile of all
objects that are dependent on the table.

The sp_recompile command simply increments the schemacnt counter for the given
table. All dependent objects' counters are checked against this counter and if
they are different the SQL Server recompiles the object.
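
Usage is simply (the table name is hypothetical):

sp_recompile titles
go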

Back to top

-------------------------------------------------------------------------------

6.1.3: What are the different types of (All Page) locks?

-------------------------------------------------------------------------------

First off, just to get it out of the way, Sybase does now support row level
locking! (See Q6.1.11 for a description of the new features.) OK, that said and
done, if you think you need row level locking, you probably aren't thinking set
based -- see Q6.1.1 for set processing.

The SQL Server uses locking in order to ensure the sanity of your queries.
Without locking there is no way to ensure the integrity of your operation.
Imagine a transaction that debited one account and credited another. If the
transaction didn't lock out readers/writers then someone could potentially see
erroneous data.

Essentially, the SQL Server attempts to use the least intrusive lock possible,
page lock, to satisfy a request. If it reaches around 200 page locks, then it
escalates the lock to a table lock and releases all page locks thus performing
the task more efficiently.

There are three types of locks:

* page locks
* table locks
* demand locks

Page Locks

There are three types of page locks:

* shared
* exclusive
* update

shared

These locks are requested and used by readers of information. More than one
connection can hold a shared lock on a data page.

This allows for multiple readers.

exclusive

The SQL Server uses exclusive locks when data is to be modified. Only one
connection may have an exclusive lock on a given data page. If a table is large
enough and the data is spread sufficiently, more than one connection may update
different data pages of a given table simultaneously.

update

An update lock is placed during a delete or an update while the SQL Server is
hunting for the pages to be altered. While an update lock is in place, there
can still be shared locks, thus allowing for higher throughput.

The update lock(s) are promoted to exclusive locks once the SQL Server is ready
to perform the delete/update.

Table Locks

There are three types of table locks:

* intent
* shared
* exclusive

intent

Intent locks indicate the intention to acquire a shared or exclusive lock on a
data page. Intent locks are used to prevent other transactions from acquiring
conflicting shared or exclusive table locks on the given table.

shared

This is similar to a page level shared lock but it affects the entire table.
This lock is typically applied during the creation of a non-clustered index.

exclusive

This is similar to a page level exclusive lock but it affects the entire table.
If an update or delete affects the entire table, an exclusive table lock is
generated. Also, during the creation of a clustered index an exclusive lock is
generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets
a demand lock to indicate that a transaction is next to lock a table or a page.

This avoids indefinite postponement if there was a flurry of readers when a
writer wished to make a change.
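
You can watch these lock types being granted with the standard sp_lock
procedure; given a spid it restricts the report to that process (the spid
value 42 below is purely illustrative):

sp_lock
go
/* or, for a single process only */
sp_lock 42
go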

Back to top

-------------------------------------------------------------------------------

6.1.4: What's the purpose of using holdlock?

-------------------------------------------------------------------------------

All select/readtext statements acquire shared locks (see Q6.1.3) to retrieve
their information. After the information is retrieved, the shared lock(s) is/
are released.

The holdlock option is used within transactions so that after the select/
readtext statement the locks are held until the end of the transaction:

* commit transaction
* rollback transaction

If the holdlock is not used within a transaction, the shared locks are
released.

Example

Assume we have the following two transactions and that each where-clause
qualifies a single row:

tx #1

begin transaction
/* acquire a shared lock and hold it until we commit */
1: select col_1 from table_a holdlock where id=1
2: update table_b set col_3 = 'fiz' where id=12
commit transaction

tx #2

begin transaction
1: update table_a set col_2 = 'a' where id=1
2: update table_c set col_3 = 'teo' where id=45
commit transaction

If tx#1, line 1 executes prior to tx#2, line 1, then tx#2 waits to acquire its
exclusive lock until tx#1 releases the shared lock on the object. This does
not happen until the commit transaction, thus slowing user throughput.

On the other hand, if tx#1 had not used the holdlock attribute, tx#2 would not
have had to wait until tx#1 committed its transaction. This is because shared
level locks are released immediately (even within transactions) when the
holdlock attribute is not used.

Note that the holdlock attribute does not stop another transaction from
acquiring a shared level lock on the object (i.e. another reader). It only
stops an exclusive level lock (i.e. a writer) from being acquired.
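
As an aside, if you want holdlock behaviour on every select in a session,
rather than adding the keyword to each statement, you can raise the isolation
level instead. A minimal sketch using standard ASE syntax:

set transaction isolation level 3
go
/* from here on, every select in this session holds its shared locks
   until the enclosing transaction commits or rolls back */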

Back to top

-------------------------------------------------------------------------------

6.1.6: How do I find the oldest open transaction?

-------------------------------------------------------------------------------
select h.spid, u.name, p.cmd, h.name, h.starttime,
p.hostname, p.hostprocess, p.program_name
from master..syslogshold h,
master..sysprocesses p,
master..sysusers u
where h.spid = p.spid
and p.suid = u.suid
and h.spid != 0 /* not replication truncation point */

Back to top

-------------------------------------------------------------------------------

6.1.7: How do I check if log truncation is blocked?

-------------------------------------------------------------------------------

System 11 and beyond:

select h.spid, convert(varchar(20), h.name), h.starttime
from master..syslogshold h,
sysindexes i
where h.dbid = db_id()
and h.spid != 0
and i.id = 8 /* syslogs */
and h.page in (i.first, i.first+1) /* first page of log = page of oldest xact */

Back to top

-------------------------------------------------------------------------------

6.1.8: The timestamp datatype

-------------------------------------------------------------------------------

The timestamp datatype is a user-defined datatype supplied by Sybase, defined as:

varbinary(8) NULL

It has a special use when used to define a table column. A table may have at
most one column of type timestamp, and whenever a row containing a timestamp
column is inserted or updated the value in the timestamp column is
automatically updated. This much is covered in the documentation.

What isn't covered is what the values placed in timestamp columns actually
represent. It is a common misconception that timestamp values bear some
relation to calendar date and/or clock time. They don't - the datatype is
badly-named. SQL Server keeps a counter that is incremented for every write
operation - you can see its current value via the global variable @@DBTS
(though don't try and use this value to predict what will get inserted into a
timestamp column as every connection shares the same counter.)

The value is maintained between server startups and increases monotonically
over time (though, again, you cannot rely on this behaviour). Eventually the
value will wrap, potentially causing huge problems, though you will be warned
before it does - see Sybase Technical News Volume 5, Number 1 (see Q10.3.1).
You cannot convert this value to a datetime value - it is simply an 8-byte
integer.

Note that the global timestamp value is used for recovery purposes in the
event of an RDBMS crash. As transactions are committed to the log, each
transaction gets a unique timestamp value. The checkpoint process places a
marker in the log with its own unique timestamp value. If the RDBMS crashes,
recovery is the process of looking for transactions that need to be rolled
forward and/or backward relative to the checkpoint event. If a transaction
spans the checkpoint event and never completed, it too needs to be rolled
back.

Essentially, this describes the write-ahead log protocol described by C.J.
Date in An Introduction to Database Systems.

So what is it for? It was created in order to support the browse-mode functions
of DB-Library (and for recovery as mentioned above). This enables an
application to easily support optimistic locking (See Q1.5.4) by guaranteeing a
watch column in a row will change value if any other column in that row is
updated. The browse functions checked that the timestamp value was still the
same as when the column was read before attempting an update. This behaviour is
easy to replicate without necessarily using the actual client browse-mode
functions - just read the timestamp value along with other data retrieved to
the client, and compare the stored value with the current value prior to an
update.
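
As a minimal sketch of that idiom (the accounts table, its columns and the
values are all hypothetical), the client saves the timestamp when it reads the
row and re-checks it in the where clause of the update; zero rows affected
means somebody else got there first:

declare @old_ts varbinary(8)
select @old_ts = ts from accounts where id = 1
/* ... the user edits the data at leisure ... */
update accounts
set balance = balance - 100
where id = 1
and ts = @old_ts /* matches nothing if another connection changed the row */
if @@rowcount = 0
    print "Row changed by another user - re-read and try again."
go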

Back to top

-------------------------------------------------------------------------------

6.1.9: Stored Procedure Recompilation and Reresolution

-------------------------------------------------------------------------------

When a stored procedure is created, the text is placed in syscomments and a
parse tree is placed in sysprocedures. At this stage there is no compiled query
plan.

A compiled query plan for the procedure only ever exists in memory (that is, in
the procedure cache) and is created under the following conditions:

1. A procedure is executed for the first time.
2. A procedure is executed by a second or subsequent user when the first plan
in cache is still in use.
3. The procedure cache is flushed by server restart or cache LRU flush
procedure.
4. The procedure is executed or created using the with recompile option.

If the objects the procedure refers to change in some way - indexes dropped,
table definition changed, etc - the procedure will be reresolved - which
updates sysprocedures with a modified tree. Before 10.x the tree grows and in
extreme cases the procedure can become too big to execute. This problem
disappears in Sybase System 11. This reresolution will always occur if the
stored procedure uses temporary tables (tables that start with "#").

There is apparently no way of telling if a procedure has been reresolved.

Traceflag 299 offers some relief, see Q1.3.3 for more information regarding
traceflags.

The Official Explanation -- Reresolution and Recompilation Explained

When stored procedures are created, an entry is made in sysprocedures that
contains the query tree for that procedure. This query tree is the resolution
of the procedure and the applicable objects referenced by it. The syscomments
table will contain the actual procedure text. No query plan is kept on disk.
Upon first execution, the query tree is used to create (compile) a query plan
(execution plan) which is stored in the procedure cache, a server memory
structure. Additional query plans will be created in cache upon subsequent
executions of the procedure whenever all existing cached plans are in use. If a
cached plan is available, it will be used.

Recompilation is the process of using the existing query tree from
sysprocedures to create (compile) a new plan in cache. Recompilation can be
triggered by any one of the following:

* First execution of a stored procedure,
* Subsequent executions of the procedure when all existing cached query plans
are in use,
* If the procedure is created with the recompile option, CREATE PROCEDURE
sproc WITH RECOMPILE
* If execution is performed with the recompile option, EXECUTE sproc WITH
RECOMPILE
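
Concretely, the two with recompile options from the list above look like this
(the procedure and table names are hypothetical):

/* the plan is never reused: a fresh compile on every execution */
create procedure show_titles
with recompile
as
    select title from titles
go

/* or force a new plan for just one execution */
exec show_titles with recompile
go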

Re-resolution is the process of updating the query tree in sysprocedures AND
recompiling the query plan in cache. Re-resolution only updates the query tree
by adding the new tree onto the existing sysprocedures entry. This process
causes the procedure to grow in size which will eventually cause an execution
error (Msg 703 - Memory request failed because more than 64 pages are required
to run the query in its present form. The query should be broken up into
shorter queries if possible). Execution of a procedure that has been flagged
for re-resolution will cause the re-resolution to occur. To reduce the size of
a procedure, it must be dropped which will remove the entries from
sysprocedures and syscomments. Then recreate the procedure.

Re-resolution can be triggered by various activities most of which are
controlled by SQL Server, not the procedure owner. One option is available for
the procedure owner to force re-resolution. The system procedure, sp_recompile,
updates the schema count in sysobjects for the table referenced. A DBA usually
will execute this procedure after creating new distribution pages by use of
update statistics. The next execution of procedures that reference the table
flagged by sp_recompile will have a new query tree and query plan created.
Automatic re-resolution is done by SQL Server in the following scenarios:

* Following a LOAD DATABASE on the database containing the procedure,
* After a table used by the procedure is dropped and recreated,
* Following a LOAD DATABASE of a database where a referenced table resides,
* After a database containing a referenced table is dropped and recreated,
* Whenever a rule or default is bound or unbound to a referenced table.

Forcing automatic compression of procedures in System 10 is done with trace
flag 241. System 11 should be doing automatic compression, though this is not
certain.

When are stored procedures compiled?

Stored procedures are in a database as rows in sysprocedures, in the form of
parse trees. They are later compiled into execution plans.

A stored procedure is compiled:

1. with the first EXECute, when the parse tree is read into cache
2. with every EXECute, if CREATE PROCEDURE included WITH RECOMPILE
3. with each EXECute specifying WITH RECOMPILE
4. if the plans in cache for the procedure are all in use by other processes
5. after a LOAD DATABASE, when all procedures in the database are recompiled
6. if a table referenced by the procedure can not be opened (using object id),
when recompilation is done using the table's name
7. after a schema change in any referenced table, including:
1. CREATE INDEX or DROP INDEX to add/delete an index
2. ALTER TABLE to add a new column
3. sp_bindefault or sp_unbindefault to add/delete a default
4. sp_bindrule or sp_unbindrule to add/delete a rule
8. after EXECute sp_recompile on a referenced table, which increments
sysobjects.schema and thus forces re-compilation

What causes re-resolution of a stored procedure?

When a stored procedure references an object that is modified after the
creation of the stored procedure, the stored procedure must be re-resolved.
Re-resolution is the process of verifying the location of referenced objects,
including the object id number. Re-resolution will occur under the following
circumstances:

1. One of the tables used by the stored procedure is dropped and re-created.
2. A rule or default is bound to one of the tables (or unbound).
3. The user runs sp_recompile on one of the tables.
4. The database the stored procedure belongs to is re-loaded.
5. The database that one of the stored procedure's tables is located in is
re-loaded.
6. The database that one of the stored procedure's tables is located in is
dropped and re-created.

What will cause the size of a stored procedure to grow?

Any of the following will cause a stored procedure to grow when it is
recompiled:

1. One of the tables used in the procedure is dropped and re-created.
2. A new rule or default is bound to one of the tables or the user runs
sp_recompile on one of the tables.
3. The database containing the stored procedure is re-loaded.

Other things causing a stored procedure to be re-compiled will not cause it to
grow. For example, dropping an index on one of the tables used in the procedure
or doing EXEC WITH RECOMPILE.

The difference is between simple recompilation and re-resolution. Re-resolution
happens when one of the tables changes in such a way that the query trees
stored in sysprocedures may be invalid. The datatypes, column offsets, object
ids or other parts of the tree may change. In this case, the server must
re-allocate some of the query tree nodes. The old nodes are not de-allocated
(there is no way to do this within a single procedure header), so the procedure
grows. In time, trying to execute the stored procedure will result in a 703
error about exceeding the 64 page limit for a query.
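
There is no direct report of a procedure's size, but since the tree is stored
as rows in sysprocedures, counting those rows before and after the operations
listed above gives a rough gauge of growth (my_proc is a hypothetical name):

select count(*)
from sysprocedures
where id = object_id("my_proc")
go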

Back to top

-------------------------------------------------------------------------------

6.1.10: How do I manipulate varbinary columns?

-------------------------------------------------------------------------------

The question was posed - How do we manipulate varbinary columns, given that
some portion - like the 5th and 6th bit of the 3rd byte - of a (var)binary
column, needs to be updated? Here is one approach, provided by Bret Halford (
br...@sybase.com), using stored procedures to set or clear certain bits of a
certain byte of a field of a row with a given id:

drop table demo_table
drop procedure clear_bits
drop procedure set_bits
go
create table demo_table (id numeric(18,0) identity, binary_col
binary(20))
go
insert demo_table values (0xffffffffffffffffffffffffffffffffffffffff)
insert demo_table values (0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
insert demo_table values (0x0000000000000000000000000000000000000000)
go

create procedure clear_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint, -- specifies which byte of binary_col to change
@mask binary(1) -- in the mask, bits to be cleared are zeroed,
-- bits to be left alone are turned on,
-- so 0x00 = clear all, 0xfb = clear bit 3 (bit 1 = LSB)
)
as
update demo_table set binary_col =
substring(binary_col,1,@bytenum-1)+
convert(binary(1),
convert(tinyint,substring(binary_col,@bytenum,1)) &
convert(tinyint,@mask)
)+
substring(binary_col,@bytenum+1,20)
from demo_table
where id = @id
go

create procedure set_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint, -- specifies which byte of binary_col to change
@mask binary(1) -- in the mask, bits to be set are turned on,
-- bits to be left alone are zeroed,
-- so 0xff = set all, 0xfb = set all but bit 3
)
as
update demo_table set binary_col =
substring(binary_col,1,@bytenum-1)+
convert(binary(1),
convert(tinyint,substring(binary_col,@bytenum, 1)) |
convert(tinyint,@mask)
)+
substring(binary_col,@bytenum+1,20)
from demo_table
where id = @id
go

select * from demo_table
-- clear bits 2,4,6,8 of byte 1 of row 1 (zeroes mark the bits to clear)
exec clear_bits 1,1,0x55

-- set bits 1-8 of byte 20 of row 3
exec set_bits 3,20,0xff

-- clear bits 1-8 of byte 4 of row 2
exec clear_bits 2,4,0x00

-- clear bit 3 of byte 5 of row 2
exec clear_bits 2,5,0xfb
-- clear bits 5-8 of byte 6 of row 2
exec clear_bits 2,6,0x0f
exec set_bits 2,10,0xff
go

select * from demo_table
go

Back to top

-------------------------------------------------------------------------------

6.1.11: How do I remove duplicate rows from a table?

-------------------------------------------------------------------------------

There are a number of different ways to achieve this, depending on what you are
trying to achieve. Usually, you are trying to remove duplication of a certain
key due to changes in business rules or recognition of a business rule that was
not applied when the database was originally built.

Probably the quickest method is to build a copy of the original table:

select *
into temp_table
from base_table
where 1=0

Create a unique index, with the ignore_dup_key attribute, on the columns that
cover the duplicating rows. This may be more columns than make up the key of
the table.

create unique index temp_idx
on temp_table(col1, col2, ..., colN)
with ignore_dup_key

Now, insert base_table into temp_table.

insert temp_table
select * from base_table

You probably want to ensure you have a very good backup of base_table at this
point, because you are about to clear it out! You will also want to check that
temp_table contains the rows you need, and to ensure that there are no
triggers on the base table (remember to keep a copy!) or RI constraints. You
probably do not want any of these to fire, or if they do, you should be aware
of the implications.

Now you have a couple of choices. You can simply drop the original table and
rename the temp table to the same name as the base table. Alternatively,
truncate the base table and insert into it from temp_table. You need the
second approach if you do want RI checks or triggers to fire on the table. I
suspect that in most cases dropping and renaming will be the best option.
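
For the drop-and-rename route, the final step is simply:

drop table base_table
go
exec sp_rename "temp_table", "base_table"
go
/* now recreate any indexes, triggers and permissions the old table had */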

If you want to simply see the duplicates in a table, the following query will
help:

select key1, key2, ...
from base_table
group by key1, key2, key3, key4, ...
having count(*) > 1

Sybase will actually allow a "select *", but it is not guaranteed to work.

Back to top

-------------------------------------------------------------------------------

SQL Advanced bcp ASE FAQ

David Owen

Archive-name: databases/sybase-faq/part11

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Platform Specific Issues - Solaris

2.1.1 Should I run 32 or 64 bit ASE with Solaris?
2.1.2 What is Intimate Shared Memory or ISM?

Platform Specific Issues - NT Performance and Tuning ASE FAQ

-------------------------------------------------------------------------------

2.1.1: Should I run 32 or 64 bit ASE with Solaris?

-------------------------------------------------------------------------------

Sybase's first foray into 64-bit was with release 11.9.3. I do not know much
about that release, but I seem to remember that it always lagged behind its
sister release, 11.9.2.

With ASE 12, Sybase has both 32-bit and 64-bit versions at the same release
level. This is a big improvement, since it removes the confusion that
surrounded 11.9.3 as to why the two were on different numbers. The releases
are supposed to be identical in terms of functionality, save the fact that the
64-bit version can address more memory.

So, why not just be done with it and have just the one version? Firstly, I
suppose that not everyone who can run Solaris has the capability to run the
64-bit version. There are still a lot of 32-bit Sparc chips around and a lot of
people use them. It is also possible to run 32-bit Solaris on a 64-bit machine.
In order to be able to run 64-bit Sybase you will have to be running 64-bit
Solaris.

If you have a 64-bit environment, you still need to choose which Sybase
version to run. If you have more than 4G bytes of memory on your machine and
you would like Sybase to take advantage of it, then the 64-bit version is for
you. If not, then the word on the street, and from Sybase themselves, is that
in identical environments the 32-bit version runs slightly faster. I have
heard a couple of explanations as to why this is so, but nothing that I find
100% convincing.

Back to top

-------------------------------------------------------------------------------

2.1.2: What is Intimate Shared Memory or ISM?

-------------------------------------------------------------------------------

Intimate Shared Memory or ISM is a specific feature of Sun Solaris. The
feature was developed so that when multiple processes (at OS level) access a
shared memory region, they do not each need their own TLB (Translation
Lookaside Buffer) entries at OS kernel level. This saves a lot of kernel
memory.

I don't think that does a whole lot for Sybase, more for Oracle I suppose.
However, there is a useful side effect: if there is enough memory available on
the machine, Solaris will typically not swap out process memory marked as ISM
if it can possibly help it.

Swapping in Solaris is done in three phases: reserved, allocated and used.
Effectively locking the shared memory in this way has the advantage of
increasing performance. Of course, if there are lots of processes on the
machine and new processes starve for memory, there is still a potential for
ISM to get swapped.

For performance reasons, it is worth ensuring that Sybase can allocate its
shared memory segment using ISM. ASE tries to use ISM by default and will
display an error message during start up if this is not possible. It is
probably worth starting Sybase soon after a machine is rebooted to give it the
best possible chance of using ISM.

More details can be found on the Sunsolve web site. I don't have a URL, sorry.
I am not even sure if this is a public site or not.

Back to top

-------------------------------------------------------------------------------

Platform Specific Issues - NT Performance and Tuning ASE FAQ

Platform Specific Issues - NT/2000

2.2.1 How to Start ASE on Remote NT Servers
2.2.2 How to Configure More than 2G bytes of Memory for ASE on NT
2.2.3 Installation Issues

Platform Specific Issues - Linux Platform Specific Issues - Solaris ASE FAQ

-------------------------------------------------------------------------------

2.2.1: How to Start ASE on Remote NT Servers

-------------------------------------------------------------------------------

Currently, there is no method of starting ASE on a remote NT server using
Sybase Central. So how do you get ASE running on an NT server located in one
city when you are currently located in another? OK, OK, so flying there is an
option, but let's try to stay within the realms of practicality <g>.

One option is to buy a good telnet server, telnet onto the box and start ASE
using the "RUN_<server>.BAT" file. This works, but depending on the telnet
server it can be a little troublesome. NT does not have as nice a set of
commands as Unix, so there is no "startserver" to run the server in the
background. The telnet window that you use to start the server may therefore
have to stay open for the lifetime of the server, making the health of ASE
dependent upon two machines not crashing. As I say, your mileage may vary, but
I have certainly found this to be the case with at least one telnet server.

Another option is to use SRVMGR.EXE from the Windows NT resource kit. Roughly
you issue

srvmgr \\SERVER-TO-BE-MANAGED

(obviously replacing SERVER-TO-BE-MANAGED with the name of the server you wish
to start ASE on!)

Select the "Services" option, and start ASE as if you were in the "Services"
applet on a local NT server.

Yet another option is to install PC Anywhere or VNC on both machines and use
one of these tools to remotely control the system. (VNC is a very good
alternative to PC Anywhere: its clients and servers run on NT, Unix and Linux,
the source code is available and it is free (in both senses of the word)!)

If anyone knows of any better methods, please let me know and I will add them
to this section. Thanks.

Back to top

-------------------------------------------------------------------------------

2.2.2: How to Configure More than 2G bytes of Memory for ASE on NT.

-------------------------------------------------------------------------------

The following was posted on news://forums.sybase.com/sybase.public.ase.nt,
taken directly from Sybase SPS case notes.

(I read recently that this is not needed, that Sybase does all of this for you
before it leaves the factory. If anyone knows the real answer, I would be
grateful for an update.)

If you are using NT Server Enterprise Edition, or Windows 2000 Advanced
Server, you may be able to get up to 3 GB:

Here is what you need to do in order to configure greater than 2GB memory for
ASE on NT:

Step 1: Make a backup copy of sqlsrvr.exe in the sybase bin directory

Step 2: Verify the current settings of sqlsrvr.exe using imagecfg.exe:

imagecfg sqlsrvr.exe
sqlsrvr.exe contains the following configuration information:
Subsystem Version of 4.0
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

Step 3: Use imagecfg to switch on large addressing using the -l (lowercase L)
switch:

imagecfg -l sqlsrvr.exe
sqlsrvr.exe contains the following configuration information:
Subsystem Version of 4.0
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

sqlsrvr.exe updated with the following configuration information:

Subsystem Version of 4.0
Image can handle large (>2GB) addresses
Stack Reserve Size: 0x20000
Stack Commit Size: 0x4

Step 4: verify ASE is able to start

Step 5: The NT machine must be booted with the /3GB flag and must have
sufficient paging file space (e.g., if you want ASE to access 3G of memory then
the paging file must be at least that size)

Step 6: increase total memory to say 2.2 gb (anything > 2gb)

Step 7: increase starting virtual memory address to 23662592 decimal (which is
1691000 hex) as shown:

sp_configure 'shared memory starting address', 23662592

Step 8: restart server

Step 9: test to connect a lot of users (more than 240)
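
Steps 6 and 7 translate into sp_configure calls along these lines. (On
pre-12.5 servers 'total memory' is expressed in 2K pages, so 2.2 GB is roughly
1153434 pages; treat the figure as illustrative and check your release's
units.)

sp_configure "total memory", 1153434
go
sp_configure "shared memory starting address", 23662592
go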

Back to top

-------------------------------------------------------------------------------

2.2.3: Installation issues.

-------------------------------------------------------------------------------

This is a list of items to be aware of when installing ASE onto NT/2000.

* Make sure that you install onto a local drive. This might not affect all
versions of ASE on NT/2000, but I could not get the software to install and
run from a network drive with the 12.5 developer edition. Try as I might,
it kept failing without really telling me why. I aborted the installation,
installed onto one of the local drives, and it worked a charm. My only NT/
2000 machine is my laptop, which has only one drive, so I do not know whether
the problem affects any drive other than "C" or just network-mounted drives.
I will be happy to take advice and corrections from Sybase or anyone who can
tell me what I was doing wrong.

Back to top

-------------------------------------------------------------------------------

Platform Specific Issues - Linux Platform Specific Issues - Solaris ASE FAQ

Platform Specific Issues - Linux

2.3.1 ASE on Linux FAQ

DBCCs Platform Specific Issues - NT ASE FAQ

-------------------------------------------------------------------------------

2.3.1: ASE on Linux FAQ

-------------------------------------------------------------------------------

There is an FAQ covering ASE on Linux at Michael Peppler's site.

http://www.mbay.net/~mpeppler/Linux-ASE-FAQ.html

It contains a fair bit of information about running Sybase ASE on Linux and if
you are interested in doing just that, then go read it. It certainly will
answer your question about why, after a new install, you can connect from the
server that ASE is installed on but no other client. (I am not going to tell
you here, you will have to go and read it :-)

Back to top

-------------------------------------------------------------------------------

DBCCs Platform Specific Issues - NT ASE FAQ

DBCC's

3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 Other sites with DBCC information.
3.4 Fixing a Munged Log

Performing any of the above may corrupt your ASE installation. Please do
not call Sybase Technical Support after screwing up ASE. Remember, always
take a dump of the master database and any other databases that are to be
affected.

isql Platform Specific Issues - Linux ASE FAQ Index

-------------------------------------------------------------------------------

3.1: How to set TS Role

-------------------------------------------------------------------------------

Some DBCC commands require that you set TS Role in order to run them. Here's
how to set it:

Login to Server as sa and perform the following:

sp_role "grant", sybase_ts_role, sa
go
set role "sybase_ts_role" on
go

Back to top

-------------------------------------------------------------------------------

3.2: DBCC Command Reference

-------------------------------------------------------------------------------

Here is the list of DBCC commands that have been sent in to the FAQ. If you
know of any more, or have more information, then please send it to
do...@midsomer.org; this is, after all, a resource for us all.

As ASE develops, some of the dbccs change. I have pointed out the major
changes from one release to another that I know about. However, one change is
so common that it will save a lot of space if I say it once: where there is an
option to specify dbid or dbname, earlier releases accepted only dbid.
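
Note that many of these commands write their output to the errorlog unless
you first redirect it to your session with trace flag 3604. For example (the
dbid and page number are purely illustrative):

dbcc traceon(3604)
go
dbcc page(4, 1)
go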

+--------------------------------------------------------------------------------------------------------------+
| | | |Risk Level|
| DBCC Name | Argument List | Comments | / |
| | | |Supported?|
|------------------+-----------------------------------------------------+--------------------------+----------|
|allocdump |( dbid | dbname, page ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| |( { print_bufs | no_print }, bucket_limit ) |Format prior to ASE 11. | |
| |-----------------------------------------------------+--------------------------+----------|
|bhash | |Format prior to ASE 12. | |
| |-----------------------------------------------------+--------------------------+----------|
| |( cname [, clet_id [, { print_bufs | no_print |Format ASE 12 and later. | |
| |},bucket_limit]] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| |( [ dbid ][, objid ][, nbufs ], printopt = {0 | 1 | |Format prior to ASE 11. | |
| |2},buftype) | | |
| |-----------------------------------------------------+--------------------------+----------|
| |[ (dbid | dbname [, objid | objname [, nbufs [, | | |
| |printopt = { 0 | 1 | 2 } |Format prior to ASE 12. | |
| |[, buftype = { kept | hashed | nothashed | ioerr} [, | | |
|buffer |cachename ] ] ] ] ] ) ] | | |
| |-----------------------------------------------------+--------------------------+----------|
| |[ (dbid | dbname [, objid | objname [, nbufs [, | | |
| |printopt = { 0 | 1 | 2 } | | |
| |[, buftype = { kept | hashed | nothashed | ioerr} [, |Format ASE 12 and later. | |
| |cachename [, cachelet_id ] | | |
| |] ] ] ] ] ) ] | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| |( startaddress, length ) |Format prior to ASE 12. | |
|bytes |-----------------------------------------------------+--------------------------+----------|
| |(startaddress, length [, showlist | STRUCT_NAME]) |Format ASE 12 and later. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |Uninstall and Uncache | |
|cacheremove |(dbid|dbname, objid|objname) |descriptor for an object | |
| | |from cache | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|checkalloc |[( dbname [, fix | nofix ] ) ] | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|checkcatalog |[( dbname )] | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|checkdb |[( dbname [, skip_ncindex ] ) ] | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|checktable |( tablename | tabid [, skip_ncindex ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |Error can take one of the | |
| | |following values: | |
| | | | |
| | | * 1133 error | |
| | | demonstrates that a | |
| | | page we think is an | |
| | | oam is not | |
| | | * 2502 error shows | |
| | | multiple references to| |
| | | the same page | |
| | | * 2503 error shows a | |
| | | breakage in the page | |
| | | linkage | |
| | | * 2521 error shows that | |
| | | the page is referenced| |
| | | but is not allocated | |
| | | on the extent page | |
| | | * 2523 error shows that | |
| | | the page number in the| |
| | | page or catalog | |
| | | entries are | |
| | | out-of-range for the | |
| | | database | |
| | | * 2525 error shows that | |
| | | an extent objid/indid | |
| | | do not match what is | |
| | | on the page | |
| | | * 2529 error shows a | |
|corrupt |( tablename, indid, error ) | page number | |
| | | out-of-range for the | |
| | | database or a 605 | |
| | | style scenario | |
| | | * 2540 error occurs when| |
| | | a page is allocated on| |
| | | an extent but the page| |
| | | is not referenced in | |
| | | the page chain | |
| | | * 2546 error occurs when| |
| | | an extent is found for| |
| | | an object without an | |
| | | of its pages being | |
| | | referenced (a stranded| |
| | | extent) | |
| | | * 7939 error occurs when| |
| | | an allocation page | |
| | | which has extents for | |
| | | an object are not | |
| | | reflected on the OAM | |
| | | page | |
| | | * 7940 error occurs when| |
| | | the total counts in | |
| | | the OAM page differ | |
| | | from the actual count | |
| | | of pages in the chain | |
| | | * 7949 error is similar | |
| | | to a 7940 except that | |
| | | the counts are on an | |
| | | allocation page basis | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |cursor_level - level of | |
|cursorinfo |(cursor_level, cursor_name) |nesting. -1 is all nesting| |
| | |levels | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|dbinfo |( [ dbname ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|dbrepair |( dbid, option = { dropdb | fixindex | fixsysindex },| | |
| |table, indexid ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|dbrepair |( dbid, ltmignore) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|dbtable |( dbid ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|delete_row |( dbid, pageid, delete_by_row = { 1 | 0 }, rownum ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|des |( [ dbid ][, objid ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |eng func may be: | |
| | | | |
|engine |(eng_func) | * "online" | |
| | | * "offline", ["<engine | |
| | | number>"] | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|extentcheck |( dbid, objid, indexid, sort = {1|0} ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|extentdump |( dbid, page ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|extentzap |( dbid, objid, indexid, sort ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|findnotfullextents|( dbid, objid, indexid, sort = { 1 | 0 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|fix_al |( [ dbname ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|help |( dbcc_command ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|ind |( dbid, objid, printopt = { 0 | 1 | 2 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|indexalloc |(tablename|tabid, indid, [full | optimized | fast], | | |
| |[fix | nofix]) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|listoam |(dbid | dbname, tabid | tablename, indid) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|locateindexpgs |( dbid, objid, page, indexid, level ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|lock | |print out lock chains | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|log |( [dbid][,objid][,page][,row][,nrecords][,type= | | |
| |{-1..36}],printopt={0|1} ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|memusage | | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|netmemshow |( option = {1 | 2 | 3} ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|netmemusage | | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|newalloc |( dbname, option = { 1 | 2 | 3 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|page |( dbid, pagenum [, printopt={0|1|2} ][, cache={0|1} ]| | |
| |[, logical={1|0} ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|pglinkage |( dbid, start, number, printopt={0|1|2}, target, | | |
| |order={1|0} ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|pktmemshow |( option = {spid} ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|procbuf |( dbid, objid, nbufs, printopt = { 0 | 1 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|prtipage |( dbid, objid, indexid, indexpage ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|pss |( suid, spid, printopt = { 1 | 0 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|rebuildextents |( dbid, objid, indexid ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |careful as this will cause| |
|rebuild_log |( dbid, 1, 1) |large jumps in your | |
| | |timestamp values used by | |
| | |log recovery. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|remap | |Only available prior to | |
| | |12. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|resource | | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|setkeepalive |(# minutes) |for use on Novell with TCP| |
| | |/IP. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |Not needed with more | |
| | |recent versions of ASE, | |
| | |use the supplied stored | |
| | |procs. On older versions | |
|settrunc |('ltm','ignore') |of ASE (pre-11?) this | |
| | |command may be useful for | |
| | |a dba who is dumping and | |
| | |loading a database that | |
| | |has replication set on for| |
| | |the original db. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |Shows the sql that the | |
|sqltext |(spid) |spid is currently | |
| | |running. Blank if idle. | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|stacktrace |(spid) |Not Linux, yet :-) | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|show_bucket |( dbid, pageid, lookup_type ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|tab |( dbid, objid, printopt = { 0 | 1 | 2 } ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|tablealloc |(tablename|tabid, [full | optimized | fast],[fix | | | |
| |nofix]) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|traceoff |( tracenum [, tracenum ... ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|traceon |( tracenum [, tracenum ... ] ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| | |Used to switch on/off | |
| | |certain options. Some are| |
| | |supported and listed in | |
| | |the docs, others | |
| | |correspond to the | |
| | |buildmaster -yall name | |
| | |minus the c prefix. | |
| | | | |
| | |Supported: | |
| | | | |
| | | * ascinserts ('value' is| |
| | | again two values, 1|0 | |
| | | for on or off and the | |
| | | table name). | |
| | | * cpuaffinity | |
| | | ('value' in this case | |
|tune |( option, value ) | is two values, the | |
| | | starting cpu number | |
| | | and "on" or "off".) | |
| | | * maxwritedes | |
| | | | |
| | |Unsupported: | |
| | | | |
| | | * indextrips | |
| | | * oamtrips | |
| | | * datatrips | |
| | | * schedspins | |
| | | * bufwashsize | |
| | | * sortbufsize | |
| | | * sortpgcount | |
| | | * maxscheds | |
| | | * max_retries | |
| | | | |
| | | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
|undo |( dbid, pageno, rowno ) | | |
|------------------+-----------------------------------------------------+--------------------------+----------|
| |( dbid|dbname, type = {0|1}, display_opts = {0|1} [, |If sp_helpdb is returning | |
|usedextents |bypiece = {0|1}]) |negative free space, try: | |
| | |usedextents(dbid, 0, 1, 1)| |
+--------------------------------------------------------------------------------------------------------------+

Back to top

-------------------------------------------------------------------------------

3.3: Other Sites with DBCC information

-------------------------------------------------------------------------------

* http://user.icx.net/~huntley/dbccinfo.htm, Al Huntley's site, contains a
comprehensive list, including discussion of some commands and example output.
* http://www.kaleidatech.com/dbcc1.htm, from KaleidaTech Associates, Inc.,
has another fairly complete list.
* http://www.sypron.nl, as you would expect, Rob Verschoor has a list of
DBCCs in his ASE Quick Reference Supplement.

Back to top

-------------------------------------------------------------------------------

3.4: Fixing a Munged Log

-------------------------------------------------------------------------------


Sybase Technical Support states that this is extremely dangerous as it
"jacks up the value of the timestamp" which is used for recovery purposes.
This may cause potential database corruption if the system fails while the
timestamp rolls over.

In 4.9.2, you could only run the dbcc rebuild_log command once; after that
you would have to use bcp to rebuild the database.

In System 10, you can run this command about 10 times.

In System 11, I (Pablo, the previous editor) tried it about 20 times with no
problem.

1> use master
2> go
1> select count(*) from your_database..syslogs
2> go

-----------
some number

1> sp_configure "allow updates",1
2> go
1> reconfigure with override /* for system 10 and below only*/
2> go

1> begin tran
2> go

/* Note the status value returned by the following select; the saved_status
alias is only a column heading, so write the number down -- you will
need it to restore sysdatabases later. */
1> select saved_status=status from sysdatabases where name = "your_database"
2> go
1> update sysdatabases set status = -32768 where name = "your_database"
2> go
1> commit tran
2> go
1> shutdown
2> go

1> dbcc rebuild_log (your_database, 0, 0)
2> go
DB-LIBRARY error (severity 9):
Unexpected EOF from SQL Server.

1> dbcc rebuild_log (your_database, 1, 1)
2> go
DBCC execution completed. If DBCC printed error messages, see your System
Administrator.


1> use your_database
2> go
1> select count(*) from syslogs
2> go

-----------
1

1> begin tran
2> go
1> update sysdatabases set status = <status value noted earlier> where name = "your_database"
2> go
(1 row affected)
1> commit tran
2> go
1> shutdown
2> go

Back to top

-------------------------------------------------------------------------------

isql Platform Specific Issues - Linux ASE FAQ Index

isql

4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?
4.4 What alternatives to isql exist?
4.5 How can I make isql secure?

bcp DBCCs ASE FAQ

-------------------------------------------------------------------------------

4.1: Hiding your password to isql

-------------------------------------------------------------------------------

Here is a menagerie (I've always wanted to use that word) of different methods
for hiding your password. Pick and choose whichever fits your environment best:

Single ASE on host

Script #1

Assuming that you are using bourne shell sh(1) as your scripting language you
can put the password in a file and substitute the file where the password is
needed.

#!/bin/sh

# invoke say ISQL or something...
(cat $HOME/dba/password_file
cat << EOD
dbcc ...
go
EOD
) | $SYBASE/bin/isql -Usa -w1000

Script #2

#!/bin/sh
umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
mypassword
use mydb
go
sp_who
go
endOfCat

Script #3

#!/bin/sh
umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go
endOfCat

Script #4


#!/bin/sh
umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
mypassword
use mydb
go
sp_who
go
endOfIsql

Script #5


#!/bin/sh
umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go
endOfIsql

Script #6


#!/bin/sh
echo 'mypassword
use mydb
go
sp_who
go' | isql -Umyuserid -Smyserver

Script #7


#!/bin/sh
echo "`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go" | isql -Umyuserid -Smyserver

Script #8

#!/bin/sh
echo "Password :\c "
stty -echo
read PASSWD
stty echo

echo "$PASSWD
waitfor delay '0:1:00'
go
" | $SYBASE/bin/isql -Usa -S${DSQUERY}

Multiple ASEs on host

Again, assuming that you are using bourne shell as your scripting language, you
can do the following:

1. Create a global file. This file will contain passwords, generic functions,
master device for the respective DSQUERY.
2. In the actual scripts, source in the global file.

Global File

SYBASE=/usr/sybase

my_password()
{
case $1 in
SERVER_1) PASSWD="this";;
SERVER_2) PASSWD="is";;
SERVER_3) PASSWD="bogus";;
*) return 1;;
esac

return 0
}

Generic Script

#!/bin/sh -a

#
# Use "-a" for auto-export of variables
#

# "dot" the file - equivalent to csh() "source" command
. $HOME/dba/global_file

DSQUERY=$1

# Determine the password: sets PASSWD
my_password $DSQUERY
if [ $? -ne 0 ] ; then # error!
echo "<do some error catching>"
exit 1
fi

# invoke say ISQL or something...
echo "$PASSWD
dbcc ...
go" | $SYBASE/bin/isql -U sa -S $DSQUERY -w1000

Back to top

-------------------------------------------------------------------------------

4.2: How to remove row affected and dashes

-------------------------------------------------------------------------------

If you pipe the output of isql then you can use sed(1) to remove this
extraneous output:

echo "$PASSWD
sp_who
go" | isql -U sa -S MY_SERVER | sed -e '/affected/d'
-e '/---/d'

If you simply wish to eliminate the row affected line use the set nocount on
switch.

Back to top

-------------------------------------------------------------------------------

4.3: How do I pipe the output of one isql to another?

-------------------------------------------------------------------------------

The following example queries sysdatabases and takes each database name and
creates a string of the sort sp_helpdb dbname and sends the results to another
isql. This is accomplished using bourne shell sh(1) and sed(1) to strip
unwanted output (see Q4.2):

#!/bin/sh

PASSWD=yuk
DSQUERY=GNARLY_HAIRBALL

echo "$PASSWD print \"$PASSWD\"
go
select 'sp_helpdb ' + name + char(10) + 'go'
from sysdatabases
go" | isql -U sa -S $DSQUERY -w 1000 | \
sed -e '/affected/d' -e '/---/d' -e '/Password:/d' | \
isql -U sa -S $DSQUERY -w 1000

To help you understand this you may wish to comment out any series of pipes and
see what output is being generated.

Back to top

-------------------------------------------------------------------------------

4.4: Are there any alternatives to isql?

-------------------------------------------------------------------------------

sqsh

In my opinion, and that of quite a lot of others, this is the most useful
(direct) replacement for isql that exists. It combines the usefulness of a
good shell with database interaction. Looking for the ability to page the
output of a long command? Look no further. Need to search a result set using a
regular expression? This is the tool for you.

Like isql, sqsh is a command line tool. It supports all of the features and
switches of isql, with a myriad of its own. There is one feature that isql has
that sqsh does not: the ability to read the password as the first line of an
input file. If you look at a lot of the examples above, the password is piped
in; sqsh does not support this as of the latest release. I am not sure whether
this is a deliberate omission or not.

A quick summary of its features:

1. command line editing;
2. command history;
3. ability to pipe output to standard filters;
4. ability to redirect output to an X window;
5. shell variables;
6. background execution.

Like all good modern shells, sqsh supports command line editing. You need to
have the GNU Readline library available on your machine, but that is now
becoming common. If you have the bash shell, you have it by default, I
believe.

Sqsh behaves very well when run in an X Windows environment. There is direct
support, by way of an output switch to go, for sending the results to an X
window, but it is better than that: if you resize the screen, sqsh resizes its
internal width to take advantage of the new size, just like any well-behaved X
application. That doesn't sound like a lot, but when you want to see the
results of a query and understand the output easily, it is much better if the
columns all line up and don't wrap. With isql you would have to exit the
program, run it again with an adjusted '-w' flag and rerun the query.

Enough said. You need to try it! You can grab it from the official sqsh
website, http://www.sqsh.org.

There are a host of others that I have heard about, but can no longer get to.
Some are mentioned in various sites, mainly the sqsh site. If any of them are
important, still being maintained, are actively supported, and are available
somewhere, then let me know and I will update this list.

* dsql
* asql
* ctsql
* qisql

However, I suspect that provided we have sqsh, no other command line version is
needed!!

SQL Advantage

This was Sybase's second attempt at a true GUI-based SQL editor. It was only
available for W86 platforms. Quite a lot of people liked it: it came free with
Sybase and did just about the minimum necessary for an SQL editor. Sadly, I
cannot find my copy any more, since 12.5 for NT no longer includes it. I have
heard through several unofficial channels that Sybase will let you have a copy
if you ask. I do not know, since I have not asked.

Not having a copy, and having a bad memory, I cannot tell you all of its
features. I cannot remember syntax highlighting or anything fancy like that,
but that does not mean that it was not there. I know that there are some true
devotees and if one of you cares to send me some words, I will slap them in
here.

There was a GUI before SQL Advantage, but it is/was too dire to mention.

jisql

This is the latest release from Sybase for the desktop interactive shell. It
uses Java, but you probably guessed that from the name. It works fine and is a
little like SQL Advantage (which was a little like Data Work Bench, which was a
...), from what I remember of that tool. Correct me if I am wrong Anthony!!

The best thing about it is that it is available for all platforms that support
Java.

The worst thing about it, and this is not so much a fault of jisql as of Java
in general, is that it is unable to use the interfaces file. I know that Java
is intended to be truly multi-platform and that your average photocopier does
not have access to environment variables, but how many photocopiers run
Sybase? In most installations I can find my way totally painlessly from ASE
server to ASE server, not worrying about ports etc. If you start using jisql
regularly you will soon know the port numbers, since specifying them is the
only way that you can connect. Personally, until this is solved, I will not
use the bloody tool.

tsql

This is the command line client that comes with FreeTDS
(http://www.freetds.org). It is a very simple client, but it works.

ASSE

Developed by Manish I Shah to be a direct replacement for Data Workbench, but
in Java. It is still in alpha, I believe, at Sourceforge. Suffers the same pros
and cons as jisql simply because of its Java heritage.

wisqlite

This is similar to jisql in its functionality, but is written in Tcl/Tk. I am
not 100% sure of the status, but will update this paragraph when I am. Try Tom
Poindexter's site for a starting point.

ntquery

This is a very lightweight SQL editor that sits somewhere between Sybase's
original offering (whose name I have had cleaned from my brain using hypnosis)
and SQL Advantage. I am not sure who wrote it, but it is free, runs on W86
platforms only and is available from ftp://ftp.midsomer.org/pub/ntquery.zip

DWB

The father of them all. I am not sure if this is officially allowed to
circulate, but I know some people that still use it and like it. I am
petitioning Sybase to allow me to make it available. It is only available for
Sun, or at least the version that I have is Sun only, but it is quite a nice
tool all the same.

Back to top

-------------------------------------------------------------------------------

4.5: How do I make isql secure?

-------------------------------------------------------------------------------

Isql uses the Open Client libraries, which have no built-in means of securing
the packets that I know of. However, it is possible to use ssh to do all of
the work for you. It is really quite straightforward. I first saw this
published on the Sybase-L list by Tim Ellis, so all of the credit goes to him.

1. You will need a server running sshd that you have access to, which also has
access to the ASE server.
2. Choose a port that you are going to make your secure connection from. Just
like all ASE port selections it is totally arbitrary, but if you are setting
up a number of these, you might want to think about a strategy: regular
server port + 100, or something. Just make sure that it does not, and will
not, clash with any of your regular servers.
3. Edit the interfaces file on the client side and set up a new server with an
IP address of localhost and the port number you chose in the previous
point. You might want to call it SERVER_SSH just to make sure that you know
that it is the secure one.
4. Run the following ssh command:
ssh -2 -N -f -L port_chosen_above:remote_server:remote_port
us...@ssh.server.com
5. Connect to the server using isql -Uuser -SSERVER_SSH

In the ssh line, the -2 means use version 2 of the protocol (obviously it
must be supported by your client and server). -f forces the ssh into the
background; this is not supported by version 1 only clients. -N means do not
execute a remote command, just forward the ports; again, this is not supported
by version 1 clients.

The us...@ssh.server.com refers to the sshd server that you have access to.

Let us look at an example. You have a server running ASE on port 4100. (Make
sure that this port is *not* visible from the outside world, otherwise it is
wide open to people attacking it directly. I have not tried all of the ins and
outs of this, and I am happy to take advice.) On this same machine you have a
copy of sshd running that you can see from the outside world.

Choose another port that you are going to have as your secure port. Let's call
it 5100 for the sake of argument. Edit the interfaces file on the client
machine (which is presumably somewhere in untrusted land, say a client site)
and add a new server, let's call it MYSERVER_SSH, and have it listen on
localhost,5100.
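
On a UNIX client the resulting interfaces entry might look something like this
(a sketch; the exact transport keywords, tcp/tli and so on, vary by platform):

MYSERVER_SSH
        master tcp ether localhost 5100
        query tcp ether localhost 5100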

Now execute the ssh magic, again from the client machine:

ssh -2 -N -f -L 5100:myserver.com:4100 syb...@myserver.com

Now connect to it using

isql -Usa -SMYSERVER_SSH

and you should get the familiar 1> prompt. All traffic to and from the server
is going via an SSH tunnel, and so can be considered relatively secure.

Back to top

-------------------------------------------------------------------------------

bcp DBCCs ASE FAQ

bcp

5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

next prev ASE FAQ

-------------------------------------------------------------------------------

5.1: How do I bcp null dates?

-------------------------------------------------------------------------------

As long as there is nothing between the field delimiters in your data, a null
will be entered. If there's a space, the value will be Jan 1, 1900.

You can use sed(1) to squeeze blanks out of fields:

sed -e 's/|[ ]*|/||/g' old_file > new_file

Back to top

-------------------------------------------------------------------------------

5.2: Can I use a named pipe to bcp/dump data out or in?

-------------------------------------------------------------------------------

System 10 and above.

If you would like to bcp out from a table to a named pipe and compress the
result:

1. %mknod bcp.pipe p
2. %compress < bcp.pipe > sysobjects.Z &
3. %bcp master..sysobjects out bcp.pipe -c -U ..
4. Use ps(1) to determine when the compress finishes.
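
To load the compressed copy back in, you can simply reverse the process (a
sketch, assuming the sysobjects.Z produced above):

1. %mknod bcp.pipe p
2. %zcat sysobjects.Z > bcp.pipe &
3. %bcp master..sysobjects in bcp.pipe -c -U ..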

To bcp from my1db..dummy_table_1 to my2db..dummy_table_2:

1. %mknod bcp.pipe p
2. %bcp my2db..dummy_table_2 in bcp.pipe -c -U .. &


To avoid confusion between the above bcp and the next, you may choose
to either use a separate window or redirect the output to a file.

3. %bcp my1db..dummy_table_1 out bcp.pipe -c -U ..

Back to top

-------------------------------------------------------------------------------

5.3: How do I exclude a column?

-------------------------------------------------------------------------------

Open/Client 11.1.1

Create a view based on the table that you want to exclude a column from and
then bcp out from the view.
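
For example (a sketch, with hypothetical table and column names), to bcp out
every column of mytable except col3:

create view v_mytable_bcp
as
select col1, col2, col4
from mytable
go

Then, from the command line:

bcp mydb..v_mytable_bcp out mytable.dat -c -Usa -SMYSERVER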

Open/Client Versions Older Than 11.1.1

The documentation Utility programs for Unix describes the use of format files,
including the field Server Column Order. Server Column Order must equal the
colid of the column, or 0 if the host file field will not be loaded into any
table column.

I don't know if anyone has got this feature to work, so here is another way of
removing the column. Suppose that you want to remove the last column. I am also
going to include a second example that removes the second of four columns and
keeps the fourth. Why? Because it is harder. The first example deals with
removing the last column.

Removing the Last Column

Edit your bcpout.fmt file and look for the changes I made below. Using the
following bcpout.fmt file to dump the data:

--- bcpout.fmt
10.0
2 <------------------ Changed number of columns to BCP to two
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "\n" 2 text1 <--- Replaced <**> with \n
3 SYBCHAR 1 512 "\n" 3 text2 <--- DELETE THIS LINE

Now recreate the table with the last column removed and use the same bcpout.fmt
file to BCP back in the data.

Now let's try removing the second of four columns on a table.

Removing the Second out of Four Columns

Edit the bcpout.fmt file and look for the changes I made below. Using the
following bcpout.fmt file to dump the data:

--- bcpout.fmt
10.0
3 <------------------ Changed number of columns to BCP to three
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "<**>" 2 text1 <--- DELETE THIS LINE
2 SYBCHAR 1 512 "<**>" 3 text2 <--- Changed number items to 2
3 SYBCHAR 1 512 "\n" 4 text3 <--- Changed number items to 3

Including the Fourth Column

Now copy the bcpout.fmt to bcpin.fmt, recreate table with col 2 removed, and
edit bcpin.fmt file:

--- bcpin.fmt
10.0
3
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "<**>" 2 text2 <-- Changed column id to 2
3 SYBCHAR 1 512 "\n" 3 text3 <-- Changed column id to 3

-------------------------------------------------------------------------------

Back to top

next prev ASE FAQ

Archive-name: databases/sybase-faq/part13

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

SQL Advanced

6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad an int or smallint with leading zeros.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE
6.2.11 Is it possible to concatenate all the values from a column and
return a single row?
6.2.12 Selecting rows N to M without Oracle's rownum?
6.2.13 How can I return number of rows that are returned from a grouped
query without using a temporary table?

Useful SQL Tricks SQL Fundamentals ASE FAQ

-------------------------------------------------------------------------------

6.2.1: How to emulate the Oracle decode function/crosstab

-------------------------------------------------------------------------------

If you are using ASE version 11.5 or later, the simplest way to implement the
Oracle decode is with the CASE statement. The following code snippet should be
compared with the example using a characteristic function given below. (Both
queries assume a student registration table, here called REGISTRATION, with
one row per STUDENT_ID/COURSE_ID pair.)

SELECT STUDENT_ID,
SUM(CASE WHEN COURSE_ID = 101 THEN 1 ELSE 0 END) AS COURSE_101,
SUM(CASE WHEN COURSE_ID = 105 THEN 1 ELSE 0 END) AS COURSE_105,
SUM(CASE WHEN COURSE_ID = 201 THEN 1 ELSE 0 END) AS COURSE_201,
SUM(CASE WHEN COURSE_ID = 210 THEN 1 ELSE 0 END) AS COURSE_210,
SUM(CASE WHEN COURSE_ID = 300 THEN 1 ELSE 0 END) AS COURSE_300
FROM REGISTRATION
GROUP BY STUDENT_ID
ORDER BY STUDENT_ID

However, if you have a version of ASE that does not support the CASE
statement, then you will have to try the following. There may be other reasons
to try characteristic functions. If you go to the Amazon web site and look for
reviews of Rozenshtein's book, Advanced SQL, you will see that one reviewer
believes that a true crosstab is not possible with the CASE statement. I am
not sure. I have also not done any performance tests to see which is quicker.

There is a neat way to use boolean logic to perform cross-tab or rotation
queries easily, and very efficiently. Using the aggregate 'Group By' clause in
a query and the ISNULL(), SIGN(), ABS(), SUBSTRING() and CHARINDEX() functions,
you can create queries and views to perform all kinds of summarizations.

This technique does not produce easily understood SQL statements.

If you want to test a field to see if it is equal to a value, say 100, use the
following code:

SELECT (1- ABS( SIGN( ISNULL( 100 - <field>, 1))))

The innermost function will return 1 when the field is null, a positive value
if the field < 100, a negative value if the field is > 100 and will return 0 if
the field = 100. This example is for Sybase or Microsoft SQL server, but other
servers should support most of these functions or the COALESCE() function,
which is the ANSI equivalent to ISNULL.

The SIGN() function returns zero for a zero value, -1 for a negative value and
1 for a positive value. The ABS() function returns zero for a zero value and a
positive value for any non-zero value; in this case it will return 0 or 1,
since its argument is the output of SIGN(), thus acting as a binary switch.

Put it all together and you get '0' if the values match, and '1' if they
don't. This is not that useful, so we subtract this return value from '1' to
invert it, giving us a TRUE value of '1' and a FALSE value of '0'. For
example, testing 101 against a COURSE_ID of 105 gives ISNULL(101 - 105, 1) =
-4, then SIGN(-4) = -1, ABS(-1) = 1 and finally 1 - 1 = 0: no match. These
return values can then be multiplied by the value of another column, or used
within the parameters of another function like SUBSTRING() to return a
conditional text value.

For example, to create a grid from a student registration table containing
STUDENT_ID and COURSE_ID columns, where there are 5 courses (101, 105, 201,
210, 300) use the following query:

Compare this version with the CASE statement above.

SELECT STUDENT_ID,
SUM(1 - ABS( SIGN( ISNULL( 101 - COURSE_ID, 1)))) COURSE_101,
SUM(1 - ABS( SIGN( ISNULL( 105 - COURSE_ID, 1)))) COURSE_105,
SUM(1 - ABS( SIGN( ISNULL( 201 - COURSE_ID, 1)))) COURSE_201,
SUM(1 - ABS( SIGN( ISNULL( 210 - COURSE_ID, 1)))) COURSE_210,
SUM(1 - ABS( SIGN( ISNULL( 300 - COURSE_ID, 1)))) COURSE_300
FROM REGISTRATION
GROUP BY STUDENT_ID
ORDER BY STUDENT_ID

Back to top

-------------------------------------------------------------------------------

6.2.2: How to implement if-then-else in a select clause

-------------------------------------------------------------------------------

ASE 11.5 introduced the case statement, which can be used to replace a lot of
this 'trick' SQL with more readable (and standard) code. With a case statement,
an if then else is as easy as:

declare @val char(20)
select @val = 'grand'

select case when @val = 'small' then
'petit'
else
'grand'
end

However, quite a number of people are still using pre-11.5 implementations,
including those people using the free 11.0.3.3 Linux release. In that case you
can use the following recipe.

To implement the following condition in a select clause:

if @val = 'small' then
print 'petit'
else
print 'grand'
fi

in versions of ASE prior to 11.5 do the following:

select isnull(substring('petit', charindex('small', @val), 255), 'grand')

To test it out, try this:

declare @val char(20)
select @val = 'grand'
select isnull(substring('petit', charindex('small', @val), 255), 'grand')

This code is not readily understandable by most programmers, so remember to
comment it well.

Back to top

-------------------------------------------------------------------------------

6.2.3: Removed

-------------------------------------------------------------------------------

6.2.4: How to pad an int or smallint with leading zeros.

-------------------------------------------------------------------------------

By example:

declare @Integer int

/* Good for positive numbers only. */
select @Integer = 1000

select "Positives Only" =
right( replicate("0", 12) + convert(varchar, @Integer), 12)

/* Good for positive and negative numbers. */
select @Integer = -1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

select @Integer = 1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

go

Produces the following results:

Positives Only
--------------
000000001000

Both Signs
-------------
-000000001000

Both Signs
-------------
+000000001000

Back to top

-------------------------------------------------------------------------------

6.2.5: Divide by zero and nulls

-------------------------------------------------------------------------------

During processing, if a divide by zero error occurs you will not get the answer
you want. If you want the result set to come back and null to be displayed
where divide by zero occurs do the following:

1> select * from total_temp
2> go
field1 field2
----------- -----------
10 10
10 0
10 NULL

(3 rows affected)
1> select field1, field1/(field2*convert(int,
substring('1',1,abs(sign(field2))))) from total_temp
2> go
field1
----------- -----------
10 1
10 NULL
10 NULL
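
On 11.5 and above you can get the same effect more readably with a CASE
expression (a sketch against the same table):

select field1,
       field1 / (case when field2 = 0 then null else field2 end)
from total_temp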

Back to top

-------------------------------------------------------------------------------

6.2.6: Convert months to financial months

-------------------------------------------------------------------------------

To convert months to financial year months (i.e. July = 1, Dec = 6, Jan = 7,
June = 12 )

Method #1

select ... ((sign(sign((datepart(month,GetDate())-6) * -1)+1) *
(datepart(month, GetDate())+6))
+ (sign(sign(datepart(month, GetDate())-7)+1) *
(datepart(month, GetDate())-6)))
...
from ...

Method #2

select charindex(datename(month, getdate()),
       "         July      August    September October   "
       + "November  December  January   February  "
       + "March     April     May       June") / 10

In the above example, the embedded blanks are significant: each month name
sits in its own ten-character slot (July starts at character 10, August at 20,
and so on), so the integer division by 10 yields the financial month number.
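
If you would rather avoid the string lookup altogether, simple modular
arithmetic gives the same mapping (a sketch; datepart returns 1 to 12):

select (datepart(month, getdate()) + 5) % 12 + 1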

Back to top

-------------------------------------------------------------------------------

Archive-name: databases/sybase-faq/part16

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Freeware

Sybase Tech Docs Open Client ASE FAQ

The best place to search for Sybase freeware is Ed Barlow (sql...@tiac.net)'s
site (http://www.edbarlow.com). He is likely to spend more time maintaining
his list than I will spend on this. I will do my best!

9.3.4 int.pl - converts interfaces file to tli
9.3.5 Sybase::Xfer.pm - Module to transfer data between two servers.
9.3.6 sybmon.pl - realtime process and lock monitor
9.3.7 showserver.pl - shows the servers on a particular machine in a
nice format.
9.3.8 Collection of Perl Scripts

Sybtcl

9.4.1 Sybtcl - TCL interface to Sybase.
9.4.2 sybdump - a Tcl script for dumping a database schema to disk
9.4.3 wisql - graphical sql editor and more

Python

9.5.1 Sybase Module for Python.

Tools, Utilities and Packages

9.6.1 sqsh - a superset of dsql with local variables, redirection,
pipes and all sorts of goodies.
9.6.2 lightweight Sybase Access via Win95/NT
9.6.3 BCPTool - a utility for transferring data from one ASE to another
(inc. native port to Linux).

'Free' Versions of ASE

The next couple of questions will move to the OS section (real) soon.

9.7.1 How to access a SQL Server using Linux see also Q11.4.6
9.7.2 Sybase on Linux Linux Penguin
9.7.3 How to configure shared-memory for Linux
9.7.4 Sybase now available on Free BSD

Other Sites of Interest

9.8.1 Ed Barlow's collection of Stored Procedures.

9.8.2 Examples of Open Client and Open Server programs -- see Q11.4.14
.
9.8.3 xsybmon - an X interface to sp_monitor

Sybase Tech Docs Open Client ASE FAQ

-------------------------------------------------------------------------------

9.0: Where is all the code and why does Section 9 suddenly load in a reasonable
amount of time?

-------------------------------------------------------------------------------

This section was in need of a spring clean, and it has now had it. I have
tested all of the stored procs included here against all versions of Sybase
that I have to hand. (11.0.3.3, 11.9.2 and 12.5 on Linux, 11.9.2 and 12 on
Solaris and 11.9.2 and 12 on NT.) If Pablo or the supplier documented that he
had tested it on other versions, then I have included those comments. Just
remember that I did not test them on anything pre-11.0.3.3. If you are still
using them on a pre-11.0.3.3 release (I know of at least one place that is
still running 4.9.2!) then let me know and I will add a suitable comment.

I have actually taken the code away and built a set of packages. First and
foremost is the stored proc package, then there is a shell script package, a
perl package and finally there is the archive package, which contains any stuff
specific to non-current releases of ASE.

In addition to wrenching out the code I have added some samples of the output
generated by the scripts. It occurred to me that people will be better able to
see if the stored proc does what they want if they can see what it produces.

Finally, part of the reason that this is here is so that people can examine the
code and see how other people write stored procs etc. Each stored proc is in a
file of its own so that you can choose which ones you wish to browse on-line
and then cut and paste them without having to go through the hassle of
un-htmling them.

Back to top

9.1.1: sp_freedevice

-------------------------------------------------------------------------------

This script displays the size of the devices configured for a server, together
with the free and used allocations.

Get it as part of the bundle (zip or tarball) or individually from here.

Output:

[30] BISCAY.master.1> sp_freedevice
[30] BISCAY.master.2>> go
total used free
--------------------- --------------------- ---------------------
950.00 MB 750.00 MB 200.00 MB

(1 row affected)
devname size used free
------------------------------ --------------------- --------------------- ---------------------
db01 100.00 MB 72.00 MB 28.00 MB
db02 100.00 MB 0.00 MB 100.00 MB
log01 100.00 MB 51.00 MB 49.00 MB
master 50.00 MB 27.00 MB 23.00 MB
sysprocsdev 200.00 MB 200.00 MB 0.00 MB
tlg01 200.00 MB 200.00 MB 0.00 MB
tmp01 200.00 MB 200.00 MB 0.00 MB

(7 rows affected, return status = 0)
[31] BISCAY.master.1>

Back to top

-------------------------------------------------------------------------------

9.1.2: sp_dos

-------------------------------------------------------------------------------

sp_dos displays the scope of an object within a database. What tables it
references, what other procedures it calls etc. Very useful for trying to
understand an application that you have just inherited.

Get it as part of the bundle (zip or tarball) or individually from here.

The output looks like this:

1> sp_dos sp_helpkey
2> go

** Utility by David Pledger, Strategic Data Systems, Inc. **
** PO Box 498, Springboro, OH 45066 **

SCOPE OF EFFECT FOR OBJECT: sp_helpkey
+------------------------------------------------------------------+
(P) sp_helpkey
|
+--(S) sysobjects
|
+--(S) syskeys
|
+--(P) sp_getmessage
|
+--(S) sysusermessages
|
+--(P) sp_validlang

(return status = 0)
1>

Back to top

-------------------------------------------------------------------------------

9.1.3: sp_whodo

-------------------------------------------------------------------------------

Sybase System 10.x and above

sp_whodo is an enhanced version of sp_who, with cpu and io usage for each user.
Note that this proc is now a little out of date since Sybase introduced the fid
column, so subordinate threads are unlikely to be grouped with their parent.

Get it as part of the bundle (zip or tarball) or individually from here.

Output:

1> sp_whodo
2> go
spid status loginame hostname blk blk_sec program
dbname cmd cpu io tran_name
------ ------------ ------------ ---------- --- ------- ----------------
------- ---------------- ------ ------- ----------------
2 sleeping NULL 0 0
master NETWORK HANDLER 0 0
4 sleeping NULL 0 0
master DEADLOCK TUNE 0 0
5 sleeping NULL 0 0
master MIRROR HANDLER 0 0
6 sleeping NULL 0 0 <astc>
master ASTC HANDLER 0 0
7 sleeping NULL 0 0
master CHECKPOINT SLEEP 0 128
8 sleeping NULL 0 0
master HOUSEKEEPER 0 33
17 running sa n-utsire.m 0 0 ctisql
master SELECT 0 1

(7 rows affected)

Back to top

-------------------------------------------------------------------------------

9.1.4: sp__revroles

-------------------------------------------------------------------------------

Well, I cannot get this one to do what it is supposed to. I am not sure if it
is just that it was written for a different release of Sybase, and 11.9.2 and
above have changed the way that roles are built, or what. Anyway, I may work
on it some more.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.1.5: sp__rev_configure

-------------------------------------------------------------------------------

This proc reverse engineers the configure settings. It produces a set of calls
to sp_configure for those values that appear in syscurconfigs. I am not sure
how relevant this is with the ability to save and load the config file.

Get it as part of the bundle (zip or tarball) or individually from here.

The output is as follows, however, I have edited away some of the values since
my list was considerably longer than this.

-- sp_configure settings
-------------------------------------------------------------
sp_configure 'recovery interval', 5
go
sp_configure 'allow updates', 0
go
sp_configure 'user connections', 25
go
sp_configure 'memory', 14336
go
sp_configure 'default character set id', 2
go
sp_configure 'stack size', 65536
go
sp_configure 'password expiration interval', 0
go
sp_configure 'audit queue size', 100
go
sp_configure 'additional netmem', 0
go
sp_configure 'default network packet size', 512
go
sp_configure 'maximum network packet size', 512
go
sp_configure 'extent i/o buffers',
go
sp_configure 'identity burning set factor', 5000
go
sp_configure 'size of auto identity', 10
go
sp_configure 'identity grab size', 1
go
sp_configure 'lock promotion threshold', 200
go

(41 rows affected)
(return status = 0)

Back to top

-------------------------------------------------------------------------------

9.1.6: sp_servermap

-------------------------------------------------------------------------------

A one stop shop for a quick peek at everything on the server.

Get it as part of the bundle (zip or tarball) or individually from here.

The output for a brand new 11.0.3.3 ASE on Linux server is as follows:

Current Date/Time
------------------------------ --------------------------
TRAFALGAR Jan 14 2001 1:48PM

Version

-------------------------------------------------------------------------------------------------

SQL Server/11.0.3.3 ESD#6/P-FREE/Linux Intel/Linux 2.2.14 i686/1/OPT/Fri Mar 17 15:45:30 CET 2000

A - DATABASE SEGMENT MAP
************************
db dbid segmap segs device fragment start (pg) size (MB)
--------------- ------ ----------- ---- --------------- ----------- ---------
master 1 7 LDS master 4 3.00
master 1 7 LDS master 3588 2.00
tempdb 2 7 LDS master 2564 2.00
model 3 7 LDS master 1540 2.00
sybsystemprocs 4 7 LDS sysprocsdev 16777216 150.00
sybsecurity 5 15 ULDS sybsecurity 33554432 300.00

Segment Codes:
U=User-defined segment on this device fragment
L=Database Log may be placed on this device fragment
D=Database objects may be placed on this device fragment by DEFAULT
S=SYSTEM objects may be placed on this device fragment


B - DATABASE INFORMATION
************************
db dbid size (MB) db status codes created
dump tran
--------------- ------ --------- ------------------ ---------------
---------------
master 1 5.00 01 Jan 00 00:00
07 Jan 01 04:01
tempdb 2 2.00 A 14 Jan 01 13:46
14 Jan 01 13:47
model 3 2.00 01 Jan 00 00:00
07 Jan 01 03:38
sybsystemprocs 4 150.00 B 07 Jan 01 03:32
14 Jan 01 13:43
sybsecurity 5 300.00 B 07 Jan 01 04:01
07 Jan 01 04:55

Status Code Key

Code Status
---- ----------------------------------
A select into/bulk copy allowed
B truncate log on checkpoint
C no checkpoint on recovery
D db in load-from-dump mode
E db is suspect
F ddl in tran
G db is read-only
H db is for dbo use only
I db in single-user mode
J db name has been changed
K db is in recovery
L db has bypass recovery set
M abort tran on log full
N no free space accounting
O auto identity
P identity in nonunique index
Q db is offline
R db is offline until recovery completes


C - DEVICE ALLOCATION MAP
*************************
device fragment start (pg) size (MB) db lstart segs
--------------- ----------- --------- --------------- ----------- ----
master 4 3.00 master 0 LDS
master 1540 2.00 model 0 LDS
master 2564 2.00 tempdb 0 LDS
master 3588 2.00 master 1536 LDS
sybsecurity 33554432 300.00 sybsecurity 0 ULDS
sysprocsdev 16777216 150.00 sybsystemprocs 0 LDS

Segment Codes:
U=User-defined segment on this device fragment
L=Database LOG may be placed on this device fragment
D=Database objects may be placed on this device fragment by DEFAULT
S=SYSTEM objects may be placed on this device fragment


D - DEVICE NUMBER, DEFAULT & SPACE USAGE
****************************************
device vdevno default disk? total (MB) used free
--------------- ------ ------------- ---------- ------- -------
master 0 Y 100.00 9.00 91.00
sysprocsdev 1 N 150.00 150.00 0.00
sybsecurity 2 N 300.00 300.00 0.00

E - DEVICE LOCATION
*******************
device location
--------------- ------------------------------------------------------------
master d_master
sybsecurity /d/TRAFALGAR/3/sybsecur.dat
sysprocsdev /d/TRAFALGAR/2/sybprocs.dat

NO DEVICES ARE MIRRORED
(return status = 0)

Back to top

-------------------------------------------------------------------------------

9.1.7: sp__create_crosstab

-------------------------------------------------------------------------------

Hmmm... not quite sure about this one; I was not 100% sure about how to set it
up. From the description it builds a cross tab query. If someone knows how to
use this, then let me know how to set it up and I will improve the description
here and provide some output.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.1.8: sp_ddl_create_table

-------------------------------------------------------------------------------

Well, you all know what a create table statement looks like... This produces
the table definitions in their barest form (lacking constraints etc.), and the
resulting DDL is perhaps not as elegant as that of some other utilities (far
be it from me to blow dbschema's trumpet :-) ), but it is worth a look just
for the query. The layout of the carriage returns being embedded within
strings is deliberate!

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.1.9: sp_spaceused_table

-------------------------------------------------------------------------------

Brief

In an environment where a lot of temporary tables (#x) are being created, how
do you tell who is using how much space? The answer is sp_spaceused_table,
which basically lists the tables in a database with row counts and space usage
statistics. I have replaced the original proc with a K-shell script that
generates a single proc; I think that it is easier to compare if it is all in
one listing. However, if you disagree I will add the original code to the
archive package, just let me know.

Get it as part of the bundle (zip or tarball) or individually from here.

The output of the proc is as follows: (I used sqsh, hence the prompt, since it
auto-resizes its width as you resize the xterm.)

[25] N_UTSIRE.tempdb.1> sp_spaceused_table
[25] N_UTSIRE.tempdb.2> go
name rowtotal reserved data index_size unused
--------------------------------------------- ----------- --------------- --------------- --------------- ---------------
#matter______00000010014294376 12039 3920 KB 3910 KB 0 KB 10 KB
#synopsis____00000010014294376 6572 15766 KB 274 KB 15472 KB 20 KB
#hearing_____00000010014294376 5856 572 KB 568 KB 0 KB 4 KB
#hearing2____00000010014294376 5856 574 KB 568 KB 0 KB 6 KB
#hearing3____00000010014294376 5856 574 KB 568 KB 0 KB 6 KB
#synopsis2___00000010014294376 6572 15820 KB 274 KB 15472 KB 74 KB

(return status = 0)

Back to top

-------------------------------------------------------------------------------

Archive-name: databases/sybase-faq/part15

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

Open Client

7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to ASE using ODBC
7.7 Which version of Open Client works with which ASE?
7.8 How do I tell the version of Open Client I am running?

Freeware Useful SQL Tricks ASE FAQ

-------------------------------------------------------------------------------

7.1: What is Open Client?

-------------------------------------------------------------------------------

Open Client is the interface (API) between client systems and Sybase servers.
Fundamentally, it comes in two forms:

Runtime

The runtime version is a set of dynamic libraries (dlls on W32 platforms) that
allow client applications to connect to Sybase and Microsoft servers, or, in
fact, any server that implements the Tabular Data Streams (TDS) protocol. You
need some form of Open Client in order to be able to connect to ASE in any way,
shape or form. Even if you are running isql on exactly the same machine as
ASE itself, communication will still be via Open Client. That is not to say
that client to server communication on the same machine will go via the
physical network; that decision is left entirely to the protocol
implementation on the machine in question.

Development

The development version contains all of the libraries from the runtime
version, plus the header files, library files and so on that enable developers
to build client apps that are able to connect to Sybase servers.

Back to top

-------------------------------------------------------------------------------

7.2: What is the difference between DB-lib and CT-lib?

-------------------------------------------------------------------------------

Both DB-lib and CT-lib are libraries that implement the TDS protocol from the
client side.

DB-lib

DB-lib was Sybase's first version. It was a good first attempt, but has/had a
number of inconsistencies. There are, or possibly were, a lot of applications
written using DB-lib. If you are about to start a new Open Client development,
consider using CT-lib, it is the preferred choice. (What version of TDS does
DB-lib speak? Is it only 4.2?)

Having said that you should use CT-lib for new developments, there is one case
where this may not be true, and that is two-phase commit: it is supported
directly by DB-lib but not by CT-lib.

CT-lib

CT-lib is a completely re-written version of Open Client that was released in
the early '90s. The API is totally different from DB-lib, and is much more
consistent. Applications written using DB-lib cannot simply be compiled using
CT-lib; they need a significant amount of porting effort. CT-lib is newer,
more consistent and, in several people's opinions, including mine, slightly
longer winded. Having said that, the future of DB-lib is uncertain and it is
certainly not being developed any more; as a result, all new apps should be
written using CT-lib.
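
To give a flavour of the CT-lib API, here is a minimal connection sketch.
Error checking is omitted, and the server name and login are placeholders:

#include <ctpublic.h>

int main(void)
{
    CS_CONTEXT    *ctx  = NULL;
    CS_CONNECTION *conn = NULL;

    /* Allocate a context and initialise the library. */
    cs_ctx_alloc(CS_VERSION_100, &ctx);
    ct_init(ctx, CS_VERSION_100);

    /* Allocate a connection and set the login properties. */
    ct_con_alloc(ctx, &conn);
    ct_con_props(conn, CS_SET, CS_USERNAME, "sa", CS_NULLTERM, NULL);
    ct_con_props(conn, CS_SET, CS_PASSWORD, "", CS_NULLTERM, NULL);

    /* Connect to a server named in the interfaces file (hypothetical). */
    ct_connect(conn, "MYSERVER", CS_NULLTERM);

    /* ... send commands with ct_command()/ct_send() here ... */

    ct_close(conn, CS_UNUSED);
    ct_exit(ctx, CS_UNUSED);
    cs_ctx_drop(ctx);
    return 0;
}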

Back to top

-------------------------------------------------------------------------------

7.3: What is this TDS protocol?

-------------------------------------------------------------------------------

Tabular Data Streams or TDS is the name given to the protocol that is used to
connect Sybase clients with Sybase servers. A specification for the protocol
can be obtained from Sybase, I had a copy but cannot seem to find it now.

There is a project that is reverse engineering the protocol and building a set
of libraries, independent of either Sybase or Microsoft but able to connect to
either of their servers. FreeTDS is a considerable way down the line, although
I do not believe that it is production ready yet!

As part of the project, they have started to document the protocol, and a view
of TDS 5.0 can be seen here.

Back to top

-------------------------------------------------------------------------------

7.4: I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.

-------------------------------------------------------------------------------

Microsoft SQL Server has always supported the TDS protocol, and up to release
7 it was the primary means of communication between clients and servers. With
release 7, TDS has been reduced to being a "legacy" protocol. (I do not know
what the communication protocol/mechanism with release 7 is; you will need to
talk to someone from Microsoft or search comp.databases.ms-sqlserver.)

In order to connect to MS Sql Server 7 using Sybase's Open Client you will need
to install Service Pack 2 of SQL Server 7, available from http://
www.microsoft.com.

Back to top

-------------------------------------------------------------------------------

7.5: The Basics of Connecting to Sybase

-------------------------------------------------------------------------------

The following describes how to connect to Sybase ASE on a UNIX machine from a
windows client with isql etc. The specific example is Sybase ASE 11.9 on
Redhat Linux 6.1, using Windows 95 and NT. (I have both on partitions and the
process was the same.) This is not a technical review or an in-depth
discussion (there are people far more qualified than me for that ;-) ). Rather
it is more along the lines of "This is how I managed it, it should work for
you". As always there are no guarantees, so if it goes wrong, it's your fault
[<g>].

The starting point for this discussion has to be that you've downloaded (or
acquired by whatever means) both Sybase ASE for Linux and the PC
Client software (a big zip file) and are ready to install. I'm not going to
discuss the install process as Sybase managed to do a good job of that, so
I'm leaving well alone. The bit you have to take notice of is when you run
srvbuild. This should happen the first time you log on as the user sybase after
the install. If it doesn't, then you can run it by hand afterwards; it lives in the
$SYBASE directory under bin. The reason why I'm mentioning this is that
srvbuild defaults to installing your database using the name "localhost". Now
the problem with localhost is that it is kind of a special case and would mean
that you could not connect to your database from anywhere other than the
server itself. This would defeat the object of this discussion, so simply name
it something else: bob, george, albert, mydatabase, whatever, the choice is
yours.

Having done this (it takes a while to complete) you should now have a running
database, so try to connect to it on the local machine with something like
isql -SServerName -Usa (where ServerName is whatever you called it when you
ran srvbuild). When it asks for a password, just press enter and you should be
greeted by the momentous welcome

1>

Not a lot for all the work you have done to get to this point, but you've
connected to your database and that's the main thing. This is very important as
not only does this mean that your database is working, but it also means that
the server half of Open Client is working. This is because even isql on the
server connects to the database using Open Client and you've just proved it
works, cool. Next run dsedit on the server and make a note of the following 3
things:

1: The server name
2: The IP address
3: The port

You're going to need these to get connected from Windows.

Now switch to your Windows machine. Did I remember to tell you to shut down
dsedit on the server? Consider it said ;-). Unpack the PC Client software zip
file and install it using the instructions that came with it. They worked fine
for me and I'm an idiot, so they should work for you. When you've finished, go
to the start menu and start dsedit (on my machine it's under programs ->
sybase). When it runs, it begins with a dialog asking you which Interface
driver to open. I've done this 3 times and went with the default every time,
so it should be a safe bet. At this point you can now add your Linux based
server. Select the menu item serverobject -> add, then enter the name of the
server you just got from your Linux box in the field labeled "server". It is
probably a good idea to use the same name you got from your Linux based dsedit,
to ensure that everyone is referring to the same server by the same name. It
prevents confusion. This then opens a new window with several fields, one of
which is the server name you just entered. The bottom field is the bit where
you enter the "nitty gritty", the server IP address and port. To do this, right
click on the field and select "modify attribute" to open the server address
dialog. When this new dialog opens, click add to open yet another dialog (is
there an award for the most gratuitous use of the word dialog???). OK, this is
the last one, honest. Leave the drop down list where it is (hopefully showing
TCP/IP or something similar). Instead move straight to the address field and
enter the following: the Linux server's IP address followed by the port number
(the one from the server dsedit), separated by a comma. On my machine it looks
like this.

192.0.0.2,2501

Now you can "OK" your way back out of the dialogs, back up to where you started
from and exit dsedit. Then launch isql on the windows box and log in.
Personally I did this from a DOS prompt, using exactly the same syntax I did on
the Linux box, but that's just because I like it that way. Now you should be
happily querying you Linux (or other UNIX for that matter) based Sybase ASE
database. What you do with it now, is covered elsewhere in this FAQ from people
able to tell you, unlike me. Now just one more time for good measure, I'm going
to type the word, wait for it.... Dialog.

Back to top

-------------------------------------------------------------------------------

7.6: Connecting to ASE Using ODBC

-------------------------------------------------------------------------------

To begin with you need to be certain that you can connect to your Linux hosted
Sybase ASE database from your Windows based machine. Do this by running isql
and connecting to the database; if this works, then you're all set (see Q7.5).
You will need the Sybase ODBC driver, which came with the PC Client package.
If you got your Windows Open Client software through some other means, then
you may need to download the ODBC driver; this will become apparent later.
Right, begin by launching the 32 bit ODBC administrator, either from the
Sybase menu under start -> programs or the control panel. Ensure that you are
displaying the "user DSN" section (by clicking on the appropriate tab).

You can then click on the button labeled add to move to the driver selection
dialog. Select Sybase System 11 and click on finish. You will by now have
noticed that this is Microsoft's way of taunting you: you haven't actually
finished yet, you're actually at the next dialog. What you have actually done
is tell Windows that you are now about to configure your Sybase ODBC driver.
There are 4 boxes on the dialog with which you are now presented, and they are:

Data Source Name
Description
Server Name
Database Name

The data source name is the Server name from your interfaces file on your Linux
server. If you are uncertain of any of these values, then log onto your Linux
box, run dsedit and take a look. It will only take you 2 minutes and is much
easier than debugging it later. The description field is irrelevant and you can
put anything in there that is meaningful to you. Server name is the IP address
of the Linux server that is hosting your database. Database name is the name
of a database to which you want to connect, once your Sybase connection has
been established. If in doubt, you can stick master in there for now, at least
you'll get a connection. Now you can click on OK to get back to the starting
screen, followed by another OK to exit ODBC administrator. We will now test the
connection by running Sybase Central. I chose this because I figure that if
you downloaded the PC Client package, then I know you've got it (at least I'm
fairly sure). When you launch Sybase Central from start -> programs ->
Sybase, you are presented with a connection dialog. There are 3 fields in this
box:

User ID
Password
Server Name

In the field labeled UserID, you can type in sa. If you've been doing some work
on Sybase through other means and you have already created a valid user, then
you can use him (her, it, whatever). In the password field, type in the
appropriate password. Assuming you have changed nothing from the
original Sybase install and you are using sa, then you will leave this blank.
The final field is a dropdown list box containing all the Sybase remote
connections you have. Assuming you only have the one, then you can leave this
alone. If you have more than one, stick to the one that you know works for now
and that allows access to the user you've used. In simple English (and if you
don't speak English, then I hope somebody has translated it :-) ): if this is
a clean install and you have altered nothing after following the instructions
earlier to establish an Open Client connection, then the top box should
contain simply "sa", the middle box should be blank, and the bottom list-box
should contain whatever the servername is in your Linux based interfaces
file. Clicking on OK
will now connect Sybase Central to the database and "away you go"...

Hope this is of some assistance to you, but if you run into problems then I
suggest you post to the newsgroup, which is where the real experts hang out. I
am unlikely to be able to help you, as I have simply noted down my experiences
as I encountered them, in the hope they may help somebody out.
I take no responsibility for anything, including any result of following the
instructions in this text.
Good luck...

Jim

Back to top

-------------------------------------------------------------------------------

7.7: Which version of Open Client works with which ASE?

-------------------------------------------------------------------------------

The TDS protocol that *is* Open Client is built so that either the client or
the server will fall back to a common dialect. I suppose that it is
theoretically possible that both would fall back for some reason, but it seems
unlikely. I was recently working with a client that was using Open/Client 4.2
to speak to a version 11.5 ASE using Powerbuilder 3 and 4! Amazing, it all
worked! The main problem that you will encounter is not lack of communication
but lack of features. The facility to bcp out of views was added in the 11.1.1
release. You will still be able to connect to servers with old copies of
Open/Client, you just won't have all of the features.

There is also another fairly neat feature of the later releases of
Open/Client: it has a very good compatibility mode for working with old
applications. The client that was running Open/Client 4.2 with Powerbuilder 3
is now connecting to the database using version 11.1.1, which is not bad when
you remember that Powerbuilder 3 only talked 4.2 DB-lib!

Back to top

-------------------------------------------------------------------------------

7.8: How do I tell the version of Open Client I am running?

-------------------------------------------------------------------------------

Running

isql -v

from the command line will return a string like:

Sybase CTISQL Utility/11.1.1/P-EBF7729/PC Intel/1/ OPT/Thu Dec 18 01:05:29 1997

The 11.1.1 part represents the version number.

Back to top

-------------------------------------------------------------------------------

Freeware Useful SQL Tricks ASE FAQ

Archive-name: databases/sybase-faq/part14

URL: http://www.isug.com/Sybase_FAQ
Version: 1.7
Maintainer: David Owen
Last-modified: 2003/03/02
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.

6.2.7: Hierarchy traversal - BOMs

-------------------------------------------------------------------------------

Alright, so you wanna know more about representing hierarchies in a relational
database? Before I get in to the nitty gritty I should at least give all of
the credit for this algorithm to: _Hierarchical Structures: The Relational
Taboo! (Can Transitive Closure Queries be Efficient?)_, by Michael J.
Kamfonas, as published in the 1992 "Relational Journal" (I don't know which
volume or issue).

The basic algorithm goes like this. Given a tree (hierarchy) that looks
roughly like this (forgive the ASCII art--I hope you are using a fixed font to
view this):

                a
              /   \
             /     \
            /       \
           b         c
          / \      / | \
         /   \    /  |  \
        d     e  f   g   h


Note that the tree need not be balanced for this algorithm to work.

The next step assigns two numbers to each node in the tree, called left and
right numbers, such that the left and right numbers of each node contain the
left and right numbers of all of the descendants of that node (I'll get into
the algorithm for assigning these left and right numbers later, but, hint: use
a depth-first search):

               1a16
              /    \
             /      \
            /        \
         2b7          8c15
         /  \        /  |  \
        /    \      /   |   \
      3d4    5e6  9f10 11g12 13h14


Side Note: The careful observer will notice that these left and right
numbers look an awful lot like a B-Tree index.

So, you will notice that all of the children of node 'a' have left and right
numbers between 1 and 16, and likewise all of the children of 'c' have left and
right numbers between 8 and 15. In a slightly more relational format this table
would look like:

Table: hier
node parent left_nbr right_nbr
----- ------ -------- ---------
a NULL 1 16
b a 2 7
c a 8 15
d b 3 4
e b 5 6
f c 9 10
g c 11 12
h c 13 14

So, given a node name, say @node (in Sybase variable format), if you want to
know all of the children of the node you can do:

SELECT h2.node
FROM hier h1,
hier h2
WHERE h1.node = @node
AND h2.left_nbr > h1.left_nbr
AND h2.left_nbr < h1.right_nbr

If you had a table that contained, say, the salary for each node in your
hierarchy (assuming a node is actually an individual in a company) you could
then figure out the total salary for all of the people working underneath
@node by doing:

SELECT sum(s.salary)
FROM hier h1,
hier h2,
salary s
WHERE h1.node = @node
AND h2.left_nbr > h1.left_nbr
AND h2.left_nbr < h1.right_nbr
AND s.node = h2.node

Pretty cool, eh? And, conversely, if you wanted to know how much it costs to
manage @node (i.e. the combined salary of all of the bosses of @node), you can
do:

SELECT sum(s.salary)
FROM hier h1,
hier h2,
salary s
WHERE h1.node = @node
AND h2.left_nbr < h1.left_nbr
AND h2.right_nbr > h1.right_nbr
AND s.node = h2.node
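
The same pattern gives you the management chain itself; ordering by the left
numbers lists it from the root down to the immediate boss of @node (a sketch):

SELECT h2.node
FROM hier h1,
hier h2
WHERE h1.node = @node
AND h2.left_nbr < h1.left_nbr
AND h2.right_nbr > h1.right_nbr
ORDER BY h2.left_nbr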

Now that you can see the algorithm in action everything looks peachy, however
the sticky point is the method in which left and right numbers get assigned.
And, unfortunately, there is no easy method to do this relationally (it can be
done, it just ain't that easy). For a real-world application that I have
worked on, we had an external program used to build and maintain the
hierarchies, and it was this program's responsibility to assign the left and
right numbers.

But, in brief, here is the algorithm to assign left and right numbers to every
node in a hierarchy. Note while reading this that the algorithm uses an array
as a stack; however, since arrays are not available in Sybase, it is
(questionably) emulated using a temp table.

DECLARE @skip int,
@counter int,
@idx int,
@left_nbr int,
@node varchar(10)

/*-- Initialize variables --*/
SELECT @skip = 1000, /* Leave gaps in left & right numbers */
@counter = 0, /* Counter of next available left number */
@idx = 0 /* Index into array */

/*
* The following table is used to emulate an array for Sybase,
* for Oracle this wouldn't be a problem. :(
*/
CREATE TABLE #a (
idx int NOT NULL,
node varchar(10) NOT NULL,
left_nbr int NOT NULL
)

/*
* I know that I always preach about not using cursors, and there
* are ways to get around it, but in this case I am more worried
* about readability over performance.
*/
DECLARE root_cur CURSOR FOR
SELECT h.node
FROM hier h
WHERE h.parent IS NULL
FOR READ ONLY

/*
* Here we are populating our "stack" with all of the root
* nodes of the hierarchy. We are using the cursor in order
* to assign an increasing index into the "stack"...this could
* be done using an identity column and a little trickery.
*/
OPEN root_cur
FETCH root_cur INTO @node
WHILE (@@sqlstatus = 0)
BEGIN
SELECT @idx = @idx + 1
INSERT INTO #a VALUES (@idx, @node, 0)
FETCH root_cur INTO @node
END
CLOSE root_cur
DEALLOCATE CURSOR root_cur

/*
* The following cursor will be employed to retrieve all of
* the children of a given parent.
*/
DECLARE child_cur CURSOR FOR
SELECT h.node
FROM hier h
WHERE h.parent = @node
FOR READ ONLY

/*
* While our stack is not empty.
*/
WHILE (@idx > 0)
BEGIN
/*
* Look at the element on the top of the stack.
*/
SELECT @node = node,
@left_nbr = left_nbr
FROM #a
WHERE idx = @idx

/*
* If the element at the top of the stack has not been assigned
* a left number yet, then we assign it one and copy its children
* on the stack as "nodes to be looked at".
*/
IF (@left_nbr = 0)
BEGIN
/*
* Set the left number of the current node to be @counter + @skip.
* Note, we are doing a depth-first traversal, assigning left
* numbers as we go.
*/
SELECT @counter = @counter + @skip
UPDATE #a
SET left_nbr = @counter
WHERE idx = @idx

/*
* Append the children of the current node to the "stack".
*/
OPEN child_cur
FETCH child_cur INTO @node
WHILE (@@sqlstatus = 0)
BEGIN
SELECT @idx = @idx + 1
INSERT INTO #a VALUES (@idx, @node, 0)
FETCH child_cur INTO @node
END
CLOSE child_cur

END
ELSE
BEGIN
/*
* It turns out that the current node already has a left
* number assigned to it, so we just need to assign the
* right number and update the node in the actual
* hierarchy.
*/
SELECT @counter = @counter + @skip

UPDATE hier
SET left_nbr = @left_nbr,
right_nbr = @counter
WHERE node = @node

/*
* "Pop" the current node off our "stack".
*/
DELETE #a WHERE idx = @idx
SELECT @idx = @idx - 1
END
END /* WHILE (@idx > 0) */
DEALLOCATE CURSOR child_cur

While reading through this, you should notice that assigning the left and right
numbers to the entire hierarchy is very costly, especially as the size of the
hierarchy grows. If you put the above code in an insert trigger on the hier
table, the overhead for inserting each node would be phenomenal. However, it is
possible to reduce the overall cost of an insertion into the hierarchy.

1. By leaving huge gaps in the left & right numbers (using the @skip
variable), you can reduce the circumstances in which the numbers need to be
reassigned for a given insert. Thus, as long as you can squeeze a new node
between an existing pair of left and right numbers you don't need to do the
re-assignment (which could affect all of the nodes in the hierarchy); see
the sketch after this list.
2. By keeping an extra flag around in the hier table to indicate which nodes
are leaf nodes (this could be maintained with a trigger as well), you avoid
placing leaf nodes in the array and thus reduce the number of updates.
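
Here is a sketch of the first point: adding a new leaf 'i' under parent 'c'
without renumbering, by picking unused numbers from the gap between c's last
child and c's own right number (it assumes that gap is at least three wide):

DECLARE @lo int, @hi int

SELECT @lo = max(right_nbr) /* rightmost existing child of 'c' */
FROM hier
WHERE parent = 'c'

SELECT @hi = right_nbr /* the parent itself */
FROM hier
WHERE node = 'c'

INSERT hier (node, parent, left_nbr, right_nbr)
VALUES ('i', 'c', @lo + (@hi - @lo) / 3, @lo + 2 * (@hi - @lo) / 3)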

Deletes on this table should never cause the left and right numbers to be
re-assigned (you could even have a trigger automagically re-parent orphaned
hierarchy nodes).

All-in-all, this algorithm is very effective as long as the structure of the
hierarchy does not change very often, and even then, as you can see, there are
ways of getting around a lot of its inefficiencies.

Back to top

-------------------------------------------------------------------------------

6.2.8: Calling OS commands from a trigger or a stored procedure

-------------------------------------------------------------------------------

11.5 and above

The Adaptive Server (11.5) will allow O/S calls from within stored procedures
and triggers. These stored procedures are known as extended stored procedures.

Pre-11.5

Periodically folks ask if it's possible to issue a system command or call a
UNIX process from a trigger or a stored procedure.

Guaranteed Message Processing

The typical ways people have implemented this capability is:

1. Buy Open Server and bind in your own custom stuff (calls to system() or
custom C code) and make Sybase RPC calls to it.
2. Have a dedicated client application running on the server box which
regularly scans a table and executes the commands written into it (and
tucks the results into another table which can have a trigger on it to
gather results...). It is somewhat tricky but cheaper than option 1.

Sybase ASE 10.0.2.5 and Above - syb_sendmsg()

This release includes a new built-in function called syb_sendmsg(). Using this
function you can send a message up to 255 bytes in size to another application
from the ASE. The arguments that need to be passed to syb_sendmsg() are the IP
address and port number on the destination host, and the message to be sent.
The port number specified can be any UDP port, excluding ports 1-1024, not
already in use by another process. An example is:

1> select syb_sendmsg("120.10.20.5", 3456, "Hello")
2> go

This will send the message "Hello" to port 3456 at IP address '120.10.20.5'.
Because this built-in uses the UDP protocol to send the message, the ASE does
not guarantee the receipt of the message by the receiving application.

Also, please note that there are no security checks with this new function.
It is possible to send sensitive information with this command and Sybase
strongly recommends caution when utilizing syb_sendmsg to send sensitive
information across the network. By enabling this functionality, the user
accepts any security problems which result from its use (or abuse).

To enable this feature you should run the following commands as the System
Security Officer.

1. Login to the ASE using 'isql'.
2. Enable the syb_sendmsg() feature using sp_configure.
1> sp_configure "allow sendmsg", 1
2> go

1> sp_configure "syb_sendmsg port number", <port number>
2> go

1> reconfigure with override -- Not necessary with 11.0 and above
2> go

The server must be restarted to set the port number.

Using syb_sendmsg() with Existing Scripts

Since syb_sendmsg() installs the configuration parameter "allow sendmsg",
existing scripts that contain the syntax

1> sp_configure allow, 1
2> go

to enable updates to system tables should be altered to be fully qualified as
in the following:

1> sp_configure "allow updates", 1
2> go

If existing scripts are not altered they will fail with the following message:

1> sp_configure allow, 1
2> go
Configuration option is not unique.
duplicate_options
----------------------------
allow updates
allow sendmsg

(return status = 1)

(The above error is a little out of date for the latest releases of ASE, there
are now 8 rows that contain "allow", but the result is the same.)

Backing Out syb_sendmsg()

The syb_sendmsg() function requires the addition of two config values. If it
becomes necessary to roll back to a previous ASE version which does not
include syb_sendmsg(), please follow the instructions below.

1. Edit the RUNSERVER file to point to the SWR ASE binary you wish to use.
2. isql -Usa -P<sa password> -Sserver_name -n -iunconfig.sendmsg -ooutput_file

Sample C program

#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char *argv[])
{
struct sockaddr_in sadr;
int portnum, sck;
ssize_t msglen;
char msg[256];

if (argc < 2) {
printf("Usage: udpmon <udp portnum>\n");
exit(1);
}

if ((portnum = atoi(argv[1])) < 1) {
printf("Invalid udp portnum\n");
exit(1);
}

/* Create a UDP socket. */
if ((sck = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0) {
printf("Couldn't create socket\n");
exit(1);
}

sadr.sin_family = AF_INET;
sadr.sin_addr.s_addr = inet_addr("0.0.0.0"); /* listen on all interfaces */
sadr.sin_port = htons(portnum);              /* port in network byte order */

if (bind(sck, (struct sockaddr *)&sadr, sizeof(sadr)) < 0) {
printf("Couldn't bind requested udp port\n");
exit(1);
}

/* Print each datagram sent by syb_sendmsg(). */
for (;;)
{
if ((msglen = recvfrom(sck, msg, sizeof(msg), 0, NULL, NULL)) < 0)
printf("Couldn't recvfrom() from udp port\n");
else
printf("%.*s\n", (int)msglen, msg);
}
}

Back to top

-------------------------------------------------------------------------------

6.2.9: Identities and Sequential Keys

-------------------------------------------------------------------------------

This has several sections, culled from various sources. It is better described
as "Everything you've ever wanted to know about identities." It will serve to
answer the following frequently asked questions:

What are the Features and Advantages of using Identities?
What are the Problems with and Disadvantages of Identities?
Common Questions about Identities

* Is Identity the equivalent of Oracle's Auto-sequencing?
* How do I configure a table to use the Identity field?
* How do I configure the burn factor?
* How do I find out if my tables have Identities defined?
* What is my current identity burn factor vulnerability?

How do I optimize the performance of a table that uses Identities?
How do I recover from a huge gap in my identity column?
How do I fix a table that has filled up its identity values?

OK, I hate identities. How do I generate sequential keys without using the
Identity feature?
How do I optimize a hand-made sequential key system for best performance?

- Question 8.1 of the comp.databases.sybase FAQ has a quick blurb about
identities and sequential numbers. Search down the page for the section
titled "Generating Sequential Numbers." Question 8.1 is a general document
describing Performance and Tuning topics to be considered and thus doesn't go
into as much detail as this page.

- There's a white paper by Malcolm Colton available from the Sybase web site.
Go to the Sybase web site http://www.sybase.com and type Surrogate into the
search form, then select the "Surrogate Primary Keys, Concurrency, and the
Cache Hit Ratio" document.

-------------------------------------------------------------------------------

Advantages/Features of Using Identities


There's an entire section devoted to Identity columns in the ASE Reference
manual, Chapter 5

Sybase System 10 introduced many changes over the 4.9.x architecture. One of
these changes was the Identity feature. The identity column is a special column
type that gets automatically updated by the server upon a new row insert. Its
purpose is to guarantee a unique row identifier not based on the other data in
the row. It was integrated with the server and made memory based for fast value
retrieval and no locking (as was/is the case with homegrown sequential key
generation schemes).

The Advantages and Features of Identities include:

* A non-SQL based solution to the problem of having a default unique value
assigned to a row. ASE prefetches identity values into cache and adds them
automatically to rows as they're inserted into tables that have an
Identity column. There are no concurrency issues, no deadlocking in
high-insert situations, and no possibility of duplicate values.
* A high performance Unique identifier; ASE's optimizer is tuned to work well
with Unique indexes based on the identity value.
* The flexibility to insert a specific value into the identity field, for
example in the case of a mistaken row deletion. (You can never update it,
however.) You accomplish this by:
1> set identity_insert [database]..[table] on
2> go

Note however that the System will not verify the uniqueness of the value
you specifically insert (unless of course you have a unique index existing
on the identity column).

* The flexibility during bcp to either retain existing identity values or to
reset them upon bcping back in. To retain the specific identity values
during a bcp out/in process, bcp your data out normally (no special
options). Then create your bcp in target table with ddl specifying the
identity column in the correct location. Upon bcp'ing back in, add the "-E"
option at the end of the bcp line, like this (from O/S prompt):
% bcp [database]..[new_table] in [bcp datafile] -Usa -S[server] -f [fmt file] -E

For procedures on resetting identity values during a bcp, see the section
regarding Identity gaps.

* Database-wide Identity options: 1) the ability to have Sybase
automatically create an Identity column on any table that isn't created
with a primary key or a unique constraint specified; 2) the ability to have
Sybase automatically include the Identity field in all indexes created,
guaranteeing all will be unique. These two options increase index
performance optimization and guarantee the use of updatable cursors
and isolation level 0 reads.
These features are set via sp_dboption, like this:
1> sp_dboption [dbname], "auto identity", true
2> go
or
1> sp_dboption [dbname], "identity in nonunique index", true
2> go

To tune the size of the auto identity (it defaults to precision 10):

1> sp_configure "size of auto identity", [desired_precision]
2> go

(the identity in nonunique index db_option and the size of auto identity
sp_configure value are new with System 11: the auto identity existed with
the original Identity feature introduction in System 10)

Like other dboptions, you can set these features on the model database
before creating new databases and all your future databases will be
configured. Be warned of the pitfalls of large identity gaps however; see
the question regarding Burn Factor Vulnerability in the Common Questions
about Identities section.

* The existence of the @@identity global variable, which keeps track of the
identity value assigned during the last insert executed on the connection.
This variable can be used when programming SQL around tables that have
identity values (in case you need to know what the last value inserted
was). If the last insert was into a table without an identity column, this
value will be "0".
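
For example, a minimal sketch using the ident_test table that is created
in the configuration question below (the inserted text is illustrative):

1> insert ident_test (text_field) values ("new row")
2> go
1> select @@identity
2> go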

Back to start of 6.2.9

-------------------------------------------------------------------------------

Disadvantages/Drawbacks of Using Identities

Despite these advantages, the Identity feature has some drawbacks:

* The mechanism that Sybase uses to allocate Identities involves a memory
based prefetch scheme for performance. The downside of this is, during
non-normal shutdowns of ASE (shutdown with nowait or flat out crashes) ASE
will simply discard or "burn" all the unused identity values it has
pre-allocated in memory. This sometimes leaves large "gaps" in your
monotonically increasing identity columns and can be unsettling for some
application developers and/or end users.

NOTE: Sybase 11.02.1 (EBF 6717) and below had a bug (bugid 96089) which
would cause "large gaps to occur in identity fields after polite
shutdowns." The Sybase 11.02.2 rollup (EBF 6886) fixed this problem. If
you're at or below 11.02.1 and you use identities, you should definitely
upgrade.

* (paraphrased from Sybooks P&T guide, Chapter 6): If you do a large number
of inserts and you have built your clustered index on an Identity column,
you will have major contention and deadlocking problems. This will
instantly create a hot spot in your database at the point of the last
inserted row, and it will cause bad contention if multiple insert requests
are received at once. Instead, create your clustered index on a field that
will somewhat randomize the inserts across the physical disk (such as last
name, account number, social security number, etc) and then create a
non-clustered index based on the identity field that will "cover" any
eligible queries.

The drawback here, as pointed out in the Identity Optimization section in
more detail, is that clustering on another field doesn't truly resolve the
concurrency issues. The hot spot simply moves from the last data page to
the last non-clustered index page of the index created on the Identity
column.

* If you fill up your identity values, no more inserts can occur. This can be
a big problem, especially if you have a large number of inserts and you
have continually crashed your server. However, this problem most often
occurs when you try to alter a table and add an Identity column that's too
small, or when you try to bcp into a table with an identity column that's
too small. If this occurs, follow the procedures for recovering from
identity gaps.
* I've heard, but have not been able to reproduce, reports that identity
values jump significantly when dumping and loading databases. This remains
unconfirmed.


NOTE: there are several other System 11 bugs related to Identities. EBF
7312 fixes BugId 97748, which caused duplicate identity values to be
inserted at times. EBF 6886 fixed (in addition to the above described bug)
an odd bug (#82460) which caused a server crash when bcping into a table w/
an identity added via alter table. As always, try to stay current on EBFs.

Back to start of 6.2.9

-------------------------------------------------------------------------------

Common questions about Identities

Is the Identity the equivalent of Oracle's auto-sequencing?:

Answer: More or less yes. Oracle's auto-sequencing feature is somewhat
transparent to the end user and automatically increments if created as a
primary key upon a row insert. The Sybase Identity column is normally specified
at table creation and thus is a functional column of the table. If however you
set the "auto identity" feature for a database, the tables created will have a
"hidden" identity column that doesn't even appear when you execute a select *
from [table]. See the Advantages of Identities for more details.

* How do I configure Identities?: You can either create your table initially
with the identity column:
1> create table ident_test
2> (text_field varchar(10),
3> ident_field numeric(5,0) identity)
4> go

Or alter an existing table and add an identity column:

1> alter table existing_table
2> add new_identity_field numeric(7,0) identity
3> go

When you alter a table and add an identity column, the System locks the
table while systematically incrementing and adding unique values to each
row. IF YOU DON'T SPECIFY a precision, Sybase defaults the size to 18!
That's 10^18-1 possible values, and some major, major problems if you ever
crash your ASE and burn a default number of values (10^18 with the default
burn factor will burn 5x10^14, or 500,000,000,000,000, values...yikes).


* How do I Configure the burn factor?: The number of identity values that
gets "burned" upon a crash or a shutdown can by found by logging into the
server and typing:
1> sp_configure "identity burning set factor"
2> go

The default value set upon install is 5000. The number "5000" in this case
is read as "0.05% of all the potential identity values you can have in this
particular case will be burned upon an unexpected shutdown." The actual
number burned depends on the precision of the identity field as you
specified it when you created your table.

To set the burn factor, type:

1> sp_configure "identity burning set factor", [new value]
2> go

This is a static change; the server must be rebooted before it takes
effect.


* How do I tell which tables have identities?: You can tell if a table has
identities one of two ways:

1. sp_help [tablename]: there is a field included in the sp_help output
describing a table called "Identity." It is set to 1 for identity
fields, 0 otherwise.
2. Within a database, execute this query:
1> select object_name(id) "table",name "column", prec "precision"
2> from syscolumns
3> where convert(bit, (status & 0x80)) = 1
4> go

this will list all the tables and the field within the table that serves as
an identity, and the size of the identity field.


* What is my identity burn factor vulnerability right now?:
In other words, what would happen to my tables if I crashed my server right
now?

Identities are created type numeric, scale 0, and precision X. A precision
of 9 means the largest identity value the server will be able to process is
10^9-1, or 1,000,000,000-1, or 999,999,999. However, when it comes to
Burning identities, the server will burn (based on the default value of
5000) .05% of 1,000,000,000 or 500,000 values in the case of a crash. (You
may think an identity precision allowing for 1 billion rows is optimistic,
but I once saw a precision set at 14...then the database crashed and their
identity values jumped 5 TRILLION. Needless to say they abandoned their
original design. Even worse, SQL Server defaults precision to 18 if you
don't specify it upon table creation...that's a MINIMUM 100,000,000,000 jump
in identity values upon a crash, even with the absolute minimum burn factor.)

Let's say you have inserted 5 rows into a table, and then you crash your
server and then insert 3 more rows. If you select all the values of your
identity field, it will look like this:
1> select identity_field from id_test
2> go
identity_field
--------------
1
2
3
4
5
500006
500007
500008

(8 rows affected)

Here's your Identity burning options (based on a precision of 10^9 as
above):

Burn value   % of values   # values burned during crash
5000         .05%          500,000
1000         .01%          100,000
100          .001%         10,000
10           .0001%        1,000
1            .00001%       100

So the absolute smallest number of values you'll burn, assuming you
configure the burn factor down to 1 (sp_configure "identity burning set
factor", 1) with a precision of 9, is 100 values.

Back to start of 6.2.9

---------------------------------------------------------------------------

Optimizing your Identity setup for performance and maintenance

If you've chosen to use Identities in your database, here are some
configuration tips to avoid typical Identity pitfalls:
+ Tune the burn factor!: see the vulnerability section for a discussion
on what happens to identity values upon ASE crashes. Large jumps in
values can crash front ends that aren't equipped to handle and process
numbers upwards of 10 Trillion. I've seen Powerbuilder applications
crash and/or not function properly when trying to display these large
identity values.
+ Run update statistics often on tables w/ identities: Any index with an
identity value as the first column in the search condition will have
its performance severely hampered if update statistics is not run
frequently. Running a nightly update statistics/sp_recompile job (a
sketch of the commands involved appears after this list) is a standard
DBA task, and should be run regardless of the existence of identities
in your tables.
+ Tune the "Identity Grab Size": ASE defaults the number of Identity
values it pre-fetches to one (1). This means that in high insert
environments the Server must constantly update its internal identity
placeholder structure before adding the row. By tuning this parameter
up:
1> sp_configure "identity grab size", [number]
2> go

You can prefetch larger numbers of values for each user as they log
into the server and insert rows. The downside is that, if a user
doesn't use all of the prefetched block of identity values, the unused
values are lost (since, if another user logs in, the next block gets
assigned to him/her). This can quickly accelerate the depletion of
identity values and can cause gaps in Identity values.
(this feature is new with System 11)

+ Do NOT build business rules around Identity values. More generally
speaking the recommendation made by DBAs is, if your end users are EVER
going to see the identity field during the course of doing their job,
then DON'T use it. If your only use of the Identity field is for its
advertised purpose (that being solely to have a uniquely identifying
row for a table to index on) then you should be fine.
+ Do NOT build your clustered index on your Identity field, especially if
you're doing lots of inserts. This will create a hot spot of contention
at the point of insertion, and in heavier OLTP environments can be
debilitating.
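
As an illustration of the nightly update statistics/sp_recompile job
mentioned above, the task boils down to running something like the
following for each table (the table name is illustrative):

1> update statistics mytable
2> go
1> sp_recompile mytable
2> go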

- There is an excellent discussion in document http://www.sybase.com/
detail?id=860 on the performance and tuning aspects of Identities. It
supplements some of the information located here (Note: this will open in a
new browser window).

Back to start of 6.2.9

---------------------------------------------------------------------------

Recovery from Large Identity value gaps or
Recovery from Identity insert errors/Full Identity tables


This section will discuss how to re-order the identity values for a table
following a crash/abnormal shutdown that has resulted in huge gaps in the
values. The same procedure is used in cases where the identity field has
"filled up" and does not allow inserts anymore. Some applications that use
Identities are not truly candidates for this process (i.e., applications
that depend on the identity field for business purposes as opposed to
simple unique row identifiers). Applications like this that wish to rid
their dependence on identities will have to re-evaluate their database
design.
+ Method 1:bcp out and in:
- First, (from O/S command line):
% bcp database..table out [data_file] -Usa -S[server] -N

This will create a binary bcp datafile and will force the user to
create a .fmt file. The -N option tells the server to skip the identity
field while bcp'ing out.
- drop and recreate the table in question from ddl (make sure your
table ddl specifies the identity field).
- Now bcp back in:

% bcp database..table in [data_file] -Usa -S[server] -f [fmt file] -N

The -N option during bcp in tells the server to ignore the data file's
placeholder column for the defined identity column.


Incidentally, if you bcp out without the -N option, drop the table,
recreate it from ddl specifying the identity field, and bcp back in
without the -N option, the same effect as above occurs.

(note: if you bcp out a table w/ identity values and then want to
preserve the identity values during the bcp back in, use the "-E"
option.)

+ Method 2: select into a new table, adding the identity column as you go
: Follow this process:
1> select [all columns except identity column]
2> [identity column name ] = identity(desired_precision)
3> into [new_table]
4> from [old table]
5> go
+ There are alternate methods that perform the above in multi steps, and
might be more appropriate in some situations.
o You can bcp out all the fields of a table except the identity
column (create the bcp format file from the original table, edit
out the identity column, and re-bcp). At this point you can create
a new table with or without the identity column; if you create it
with, as you bcp back in the Server will assign new identity
values. If you create it without, you can bcp back in normally and
then alter the table and add the identity later.
o You can select all columns but the identity into a new table, then
alter that table and add an identity later on.

Back to start of 6.2.9

---------------------------------------------------------------------------

How do I generate Sequential Keys w/o the Identity feature?


There are many reasons not to use the Identity feature of Sybase. This
section will present several alternative methods, along with their
advantages and drawbacks. The methods are presented in increasing order of
complexity. The most often implemented is Method 3, which is a more robust
version of Method 2 and which uses a surrogate-key storage table.

Throughout this section the test table I'm adding lines to and generating
sequential numbers for is table inserttest, created like this:

1> create table inserttest
2> (testtext varchar(25), counter int)
3> go
+ Method 1: Create your table with a column called counter of type int.
Then, each time you insert a row, do something like this:
1> begin tran
2> declare @nextkey int
3> select @nextkey=max(counter)+1 from inserttest holdlock
4> insert inserttest (testtext,counter) values ("test_text", @nextkey)
5> go


1> commit tran
2> go

This method is rather inefficient: large tables will take minutes to
return a max(column) value, and the entire table must be locked for
each insert (since the max() will perform a table scan). Further, the
select statement does not take an exclusive lock when it executes
unless you use the "holdlock" option; so either duplicate values might
be inserted into your target table (no holdlock) or you get massive
deadlocking (with holdlock).


+ Method 2: See Question 10.1.1 of the comp.databases.sybase FAQ and the
May 1994 (Volume 3, Number 2) Sybase Technical Note (these links will
open in a new browser window). Search down in the tech note for the
article titled, "How to Generate Sequential Keys for Table Key
Columns." This has a simplistic solution that is expanded upon in
Method 3.

+ Method 3: Create a holding table for keys in a common database: Here's
our central holding table.
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int)
4> go

And initially populate it with the tablenames and last values inserted
(enter in a 0 for tables that are brand new).

1> insert into keystorage (tablename,lastkey)
2> select "inserttest", max(counter) from inserttest
3> go

Now, whenever you go to insert into your table, go through a process
like this:

1> begin tran
2> update keystorage set lastkey=lastkey+1 where tablename="inserttest"
3> go

1> declare @lastkey int
2> select @lastkey = lastkey from keystorage where tablename="inserttest"
3> insert inserttest (testtext,counter) values ("nextline",@lastkey)
4> go



1> commit tran
2> go

There is plenty of room for error checking with this process: for
example (code adapted from Colm O'Reilly (co...@mail.lk.blackbird.ie)
post to Sybase-L 6/20/97):

1> begin tran
2> update keystorage set lastkey=lastkey+1 where tablename="inserttest"
3> if @@rowcount=1
4> begin
5> declare @lastkey int
6> select @lastkey=lastkey from keystorage where tablename="inserttest"
7> end
8> commit tran
9> begin tran
10> if @lastkey is not null
11> begin
12> insert inserttest (testtext,counter) values ("third line",@lastkey)
13> end
14> commit tran
15> go

This provides a pretty failsafe method of guaranteeing the success of
the select statements involved in the process. You still have a couple
of implementation decisions though:
o One transaction or Two? The above example uses two transactions to
complete the task; one to update the keystorage and one to insert
the new data. Using two transactions reduces the amount of time the
lock is held on keystorage and thus is better for high insertion
applications. However, the two transaction method opens up the
possibility that the first transaction will commit and the second
will roll back, leaving a gap in the sequential numbers. (of
course, this gap is small potatoes compared to the gaps that occur
in Identity values). Using one transaction (deleting lines 8 and 9
in the SQL above) will guarantee absolutely no gaps in the values,
but will lock the keystorage table longer, reducing concurrency in
high insert applications.
o Update first or select first? The examples given generally update
the keystorage table first, THEN select the new value. By performing
the select first (you will have to rework the creation scheme
slightly: by selecting first you're actually getting the NEXT key
to add, whereas by updating first, the keystorage table actually
holds the LAST key added) you allow the application to continue
processing while it waits for the update lock on the table.
However, performing the update first guarantees uniqueness (selects
are not exclusive).


Some DBAs experienced with this keystorage table method warn of large
amounts of blocking in high insert activity situations, a potential
drawback.


+ Method 4: Enhance the above method by creating an insert trigger on
your inserttest table that performs the next-key obtainment logic. Or
you could create an insert trigger on keystorage which updates the
table and obtains your value for you. Integrating the trigger logic to
your application might make this approach more complex. Also, because
of the nature of the trigger you'll have to define the sequence number
columns as allowing NULL values (a bad thing if you're depending on the
sequential number as your primary key). Plus, triggers will slow the
operation down because after obtaining the new value via trigger,
you'll have to issue an extra update command to insert the rest of your
table values.
+ Method 5: (Thanks to John Drevicky (jdre...@tca-techsys.com))
The following procedure fragment is offered as another example of
updating and returning the next sequential key, with an option that
allows automatic reuse of numbers: when next_seq_id reaches max_seq_id
the sequence restarts at 1.
--
DECLARE @sql_err int, @sql_count int
--
begin tran
--
select @out_seq = 0
--
UPDATE NEXT_SEQUENCE
SET next_seq_id =
    ( next_seq_id
      * sign(1 + sign(max_seq_id - next_seq_id)) -- 0 when next > max; else 1
      * sign(max_seq_id - next_seq_id)           -- 0 when next = max;
                                                 -- 1 when next < max;
                                                 -- -1 when next > max
                                                 -- (both factors are 1 when next < max)
    ) + 1                                        -- increment by, or restart at, 1
WHERE seq_type = @in_seq_type
--
select @sql_err = @@error, @sql_count = @@rowcount
--
IF @sql_err = 0 and @sql_count = 1
BEGIN
    select @out_seq = next_seq_id
    from NEXT_SEQUENCE
    where seq_type = @in_seq_type
    --
    commit tran
    return 0
END
ELSE
BEGIN
    RAISERROR 44999 'Error %1! returned from proc derive_next_sequence...no update occurred', @sql_err
    rollback tran
END
+ Other Methods: there are several other implementation alternatives
available that involve more complex logic but which might be good
solutions. One example has a central table that stores pre-inserted
sequential numbers that are deleted as they're inserted into the
production rows. This method allows the sequence numbers to be recycled
if their associated row is deleted from the production table. An
interesting solution was posted to Sybase-L 6/20/97 by Matt Townsend (
mto...@concentric.net) and is based on the millisecond field of the
date/time stamp. His solution guarantees uniqueness without any
surrogate tables or extra inserts/updates, and performs better than the
other methods described here (including Identities), but cannot
produce exactly sequential numbers. Some other solutions are
covered in a white paper available at Sybase's Technical library
discussing Sequential Keys (this will open in a new browser window).

Back to start of 6.2.9

---------------------------------------------------------------------------

Optimizing your home grown Sequential key generating process for any
version of Sybase

+ max_rows_per_page/fillfactor/table padding to simulate row level
locking: This is the most important tuning mechanism when creating a
hand-made sequence key generation scheme. Because of Sybase's page
level locking mechanism, your concurrency in high-insert
situations could be destroyed unless the server only grabs one
row at a time. Since Sybase had no row-level locking before ASE 11.9,
we simulate row-level locking by creating our tables in such a
way as to guarantee one row per 2048 byte page.
o For pre-System 11 servers; Calculate the size of your rows, then
create dummy fields in the table that get populated with junk but
which guarantee the size of the row will fill an entire page. For
example (code borrowed from Gary Meyer's 5/8/94 ISUG presentation (
gme...@netcom.com)):
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int,
4> filler1 char(255) not null,
5> filler2 char(255) not null,
6> filler3 char(255) not null,
7> filler4 char(255) not null,
8> filler5 char(255) not null,
9> filler6 char(255) not null,
10> filler7 char(255) not null)
11> with fillfactor = 100
12> go

We use 7 char(255) fields to pad our small table. We also specify
the fillfactor create table option to be 100. A fillfactor of 100
tells the server to completely fill every data page. Now, during
your initial insertion of a line of data, do this:

1> insert into keystorage
2> (tablename,lastkey,
3> filler1,filler2,filler3,filler4,filler5,filler6,filler7)
4> values
5> ("yourtable",0,
6> replicate("x",250),replicate("x",250),
7> replicate("x",250),replicate("x",250),
8> replicate("x",250),replicate("x",250),
9> replicate("x",250))
10> go

This pads the row with 1,785 bytes of junk (char not null columns are
blank-padded to their full declared length), all but guaranteeing
that, given a row's byte size limit of 1962 bytes (a row cannot
span more than one page, thus the 2048 byte page size minus server
overhead == 1962), we will be able to simulate row level locking.

o In Sybase 11, a new create table option was introduced:
max_rows_per_page. It automates the manual procedures above and
guarantees at a system level what we need to achieve; one row per
page.
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int)
4> with max_rows_per_page = 1
5> go
+ Create unique clustered indexes on the tablename/entity name within
your keystorage table. This can only improve its performance. Remember
to set max_rows_per_page or the fillfactor on your clustered index, as
clustered indexes physically reorder the data.
+ Break up the process into multiple transactions wherever possible; this
will reduce the amount of time any table lock is held and will increase
concurrency in high insertion environments.
+ Use Stored Procedures: Put the SQL commands that update the keystorage
table and then insert the updated key value into a stored procedure (a
sketch appears after this list). Stored procedures are generally faster
than individual SQL statements in your code because procedures are
pre-compiled and have optimization plans for index usage stored in
Sybase's system tables.
+ Enhance the keystorage table to contain a fully qualified table name as
opposed to just the tablename. This can be done by adding fields to the
table definition or by just expanding the entity name varchar field
definition. Then place the keystorage table in a central location/
common database that applications share. This will eliminate multiple
keystorage tables but might add length to queries (since you have to do
cross-database queries to obtain the next key).
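
As an illustration of the stored procedure suggestion above, here is a
minimal sketch built around the keystorage table from Method 3 (the
procedure name get_next_key is made up for this example):

create procedure get_next_key
    @tablename varchar(25),
    @nextkey int output
as
begin
    begin tran
    update keystorage set lastkey = lastkey + 1
    where tablename = @tablename
    if @@rowcount != 1
    begin
        -- the table was never registered in keystorage; give up
        rollback tran
        return 1
    end
    select @nextkey = lastkey from keystorage
    where tablename = @tablename
    commit tran
    return 0
end

A caller would then do something like:

1> declare @key int
2> exec get_next_key "inserttest", @key output
3> insert inserttest (testtext, counter) values ("next line", @key)
4> go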

- There is an excellent discussion located in the whitepapers section
of Sybase's home page discussing the performance and tuning aspects of
any type of Sequential key use. It supplements the information here
(note: this page will open in a new browser window).

Back to start of 6.2.9

Back to top

-------------------------------------------------------------------------------

6.2.10: How can I execute dynamic SQL with ASE?

-------------------------------------------------------------------------------

Adaptive Server Enterprise: System 12

ASE 12 supports dynamic SQL, allowing the following:

declare @sqlstring varchar(255)
select @sqlstring = "select count(*) from master..sysobjects"
exec (@sqlstring)
go

Adaptive Server Enterprise: 11.5 and 11.9

There is a neat trick that was reported first by Bret Halford ( br...@sybase.com
). (If anyone knows better, point me to the proof and I will change this!) It
utilises the CIS features of Sybase ASE.

* Firstly define your local server to be a remote server using
sp_addserver LOCALSRV,sql_server[,INTERFACENAME]
go

* Enable CIS
sp_configure "enable cis",1
go

* Finally, use sp_remotesql, sending the sql to the server defined in point
1.
declare @sqlstring varchar(255)
select @sqlstring = "select count(*) from master..sysobjects"
sp_remotesql LOCALSRV,@sqlstring
go

Remember to ensure that all of the databases referred to in the SQL string are
fully qualified since the call to sp_remotesql places you back in your default
database.

Sybase ASE (4.9.x, 10.x and 11.x before 11.5)

Before System 11.5 there was no real way to execute dynamic SQL. Rob Verschoor
has some very neat ideas that fill some of the gaps (http://www.euronet.nl/
~syp_rob/dynsql.html).

Dynamic Stored Procedure Execution

With System 10, Sybase introduced the ability to execute a stored procedure
dynamically.

declare @sqlstring varchar(255)
select @sqlstring = "sp_who"
exec @sqlstring
go
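
Parameters can be passed in the usual way after the variable; for example
(a small sketch using the same undocumented mechanism, with sp_who's
optional login name argument):

1> declare @sqlstring varchar(255)
2> select @sqlstring = "sp_who"
3> exec @sqlstring "sa"
4> go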

For some reason Sybase chose never to document this feature.

Obviously all of this is talking about executing dynamic SQL within the server
itself ie stored procedures and triggers. Dynamic SQL within client apps is a
different matter altogether.

Back to top

-------------------------------------------------------------------------------

6.2.11: Is it possible to concatenate all the values from a column and return a
single row?

-------------------------------------------------------------------------------

Hey, this was quite cool I thought. It is now possible to concatenate a series
of strings to return a single column, in a manner somewhat analogous to sum()
adding up all of the numbers in a column. Obviously, in versions before 12.5
the longest string that you can have is 255 characters, but with very long
varchars this may prove useful to someone.

Use a case statement, a la,

1> declare @string_var varchar(255)
2>
3> select @string_var = ""
4>
5> select @string_var = @string_var +
6> (case 1 when 1
7> then char_col
8> end)
9> from tbl_a
10>
11> print "%1!", @string_var
12> go
(1 row affected)
ABCDEFGH
(8 rows affected)
1> select * from tbl_a
2> go
char_col
--------
A
B
C
D
E
F
G
H

(8 rows affected)
1>

Back to top

-------------------------------------------------------------------------------

6.2.12: Selecting rows N to M without Oracle's rownum?

-------------------------------------------------------------------------------

Sybase does not have a direct equivalent to Oracle's rownum but its
functionality can be emulated in a lot of cases.

If you are simply trying to retrieve the first N rows of a table, then simply
use:

set rowcount <N>

replacing <N> with your desired number of rows. (set rowcount 0 restores
normality.) If it is simply the last N rows, then use a descending order-by
clause in the select.

1> set rowcount <N>
2> go
1> select foo
2> from bar
3> order by barID desc
4> go

Suppose you are trying to retrieve rows 100 to 150, say, from a table in a
given order. You could use this to retrieve rows for a set of web pages, but
there are probably more efficient ways using cursors, well written queries or
even Sybperl! The general idea is to select the rows into a temporary table,
adding an identity column at the same time. Only select enough rows to do the
job, using the rowcount trick. Finally, return the rows from the temporary
table where the identity column is between 100 and 150. Something like this:

set rowcount 150

select pseudo_key = identity(3),
col1,
col2
into #tempA
from masterTable
where clause...
order by 2,3

select col1,col2 from #tempA where pseudo_key between 100 and 150

Remember to reset rowcount back to 0 before issuing any more SQL or you will
only get back 150 rows!

A small optimisation would be to select only the key columns for the source
table together with the identity key. Once you have the set of rows you require
in the temporary table, join this back to the source using the key columns to
get any data that you require.
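
For example (a sketch following the pattern above; keycol stands in for
whatever uniquely identifies a row in masterTable):

set rowcount 150

select pseudo_key = identity(3),
       keycol
into #keys
from masterTable
where clause...
order by keycol

set rowcount 0

select m.col1,
       m.col2
from masterTable m,
     #keys k
where m.keycol = k.keycol
and k.pseudo_key between 100 and 150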

An alternative, which might be better if you needed to join back to this table
a lot, would be to insert enough rows to cover the range as before, but then
delete the set of unwanted rows. This would be a very efficient mechanism if
the majority of your queries involved the first few rows of a table. A typical
application for this might be a search engine displaying relevant items first.
The chances are that the user is going to be bored after the first couple of
pages and go back to playing 'Internet Doom'.

set rowcount 150

select col1,
col2
into #tempA
from masterTable
where clause...

set rowcount 100

delete #tempA

Sybase does not guarantee to return rows in any particular order, so the delete
may not delete the correct set of rows. In the above example, you should add an
order-by to the 'select' and build a clustered index on a suitable key in the
temporary table.

The following stored proc was posted to the Sybase-L mailing list and uses yet
another mechanism. You should check that it works as expected in your
environment, since it relies on the fact that a variable will be set using the
last row that is returned from a result set. This is not published behaviour
and is not guaranteed by Sybase.

CREATE PROCEDURE dbo.sp_get_posts
@perpage INT,
@pagenumber INT
WITH RECOMPILE
AS

-- if we're on the first page no need to go through the @postid push
IF @pagenumber = 1
BEGIN
SET ROWCOUNT @perpage

SELECT ...
RETURN
END

-- otherwise

DECLARE @min_postid NUMERIC( 8, 0 ),
@position INT

SELECT @position = @perpage * ( @pagenumber - 1 ) + 1

SET ROWCOUNT @position

-- What happens here is that it will select through the rows
-- and order the whole set.
-- It will keep pushing postid into @min_postid until it hits
-- ROWCOUNT, and it does this against the ordered set (a work
-- table).

SELECT @min_postid = postid
FROM post
WHERE ...
ORDER BY postid ASC

SET ROWCOUNT @perpage

-- we know where we want to go (say the 28th post in a set of 50).
SELECT ...
FROM post
WHERE postid >= @min_postid
...
ORDER BY postid ASC
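
A hypothetical call, for page 3 at 25 posts per page, would then be:

1> exec sp_get_posts 25, 3
2> go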

Yet another solution would be to use a loop and a counter. Probably the least
elegant, but again, it would depend on what you were trying to do as to what
would be most appropriate.

As you can see, none of these are particularly pretty. If you know of a better
method, please forward it to do...@midsomer.org.

Back to top

-------------------------------------------------------------------------------

6.2.13: How can I return number of rows that are returned from a grouped query
without using a temporary table?

-------------------------------------------------------------------------------

This question is certainly not rocket science, but it is often nice to know how
many rows are returned as part of a group by. This might be for a report or a
web query, where you would want to tell the user how many rows were returned on
page one. It is easy using a temp table, but how to do it without a temp table
is a little harder. I liked this solution and thought that it might not be
obvious to everyone; it was certainly educational to me. Thanks go to Karl Jost
for a very nice answer.

So, given data like:

name item
---- ----
Brown 1
Smith 2
Brown 5
Jones 7

you wish to return a result set of the form:

name sum(item) rows
---- --------- ----
Brown 6 3
Jones 7 3
Smith 2 3

rather than

name sum(item) rows
---- --------- ----
Brown 6 2
Jones 7 1
Smith 2 1

Use the following, beguilingly simple query:

select name, sum(item), sum(sign(count(*)))
from data
group by name

This works because count(*) is evaluated once per group, sign() turns each of
those (always positive) counts into a 1, and the outer sum() adds those ones
across all of the groups, so every row carries the total number of groups.

Back to top

-------------------------------------------------------------------------------


Useful SQL Tricks

6.3.1 How to feed the result set of one stored procedure into another.
6.3.2 Is it possible to do dynamic SQL before ASE 12?


-------------------------------------------------------------------------------

Note: A number of the following tips require CIS to be enabled (at this precise
moment, all of them require CIS :-) The optimiser does take on a different
slant, however small, when CIS is enabled, so it is up to you to ensure that
things don't break when you do turn it on. Buyer beware. Test, test, test and
when you have done that, check some more.

-------------------------------------------------------------------------------

6.3.1: How to feed the result set of one stored procedure into another.

-------------------------------------------------------------------------------

I am sure that this is all documented, but it is worth adding here. It uses
CIS, as do a number of useful tricks. CIS is disabled by default before 12.0
and not available before 11.5. It is courtesy of BobW from
sybase.public.ase.general; full accreditation will be granted if I can find out
who he is. Excellent tip!

So, the scenario is that you have a stored procedure, SP_A, and you wish to use
the result set that it returns in a query.

Create a proxy table for SP_A.

create table proxy_SP_A (
a int,
b int,
c int,
_p1 int null,
_p2 int null
) external procedure
at "SELF.dbname.dbo.SP_A"

Columns a, b, c correspond to the result set of SP_A. Columns _p1, _p2
correspond to the @p1, @p2 parameters of SP_A. "SELF" is an alias put in
sysservers to refer back to the local server.

If you only have one row returned the proxy table can be used with the
following:

declare @a int, @b int, @c int
select @a = a, @b = b, @c = c from proxy_SP_A
where _p1 = 3 and _p2 = 5

More rows can be handled with a cursor, as sketched below.
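
For example, a minimal cursor sketch over the same proxy table (the
processing of each row is left as a comment):

declare proxy_cur cursor for
    select a, b, c from proxy_SP_A
    where _p1 = 3 and _p2 = 5
go

declare @a int, @b int, @c int
open proxy_cur
fetch proxy_cur into @a, @b, @c
while (@@sqlstatus = 0)
begin
    -- process @a, @b and @c here
    fetch proxy_cur into @a, @b, @c
end
close proxy_cur
deallocate cursor proxy_cur
go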

Back to top

-------------------------------------------------------------------------------

6.3.2: Is it possible to do dynamic SQL before ASE 12?

-------------------------------------------------------------------------------

Again, using CIS, it is possible to fake dynamic SQL. Obviously for this to
work, CIS must be enabled. In addition, the local server must be added to
sysservers as a remote server. There is a stored procedure, sp_remotesql, that
takes as arguments a remote server and a string containing SQL.

As before, adding SELF as the 'dummy' server name pointing to the local server
as if it were a remote server, we can execute the following:

sp_remotesql "SELF","select * from sysdatabases"

Which will do just what you expect, running the query on the local machine. The
stored proc will take up to 251 arguments (according to its own documentation)
of char(255) or varchar(255), and concatenates them all together. So we can do
the following:
1> declare @p1 varchar(255),@p2 varchar(255),@p3 varchar(255), @p4 varchar(255)
2>
3> select @p1 = "select",
4> @p2 = " name ",
5> @p3 = "from ",
6> @p4 = "sysdatabases"
7>
8> exec sp_remotesql "SELF", @p1, @p2, @p3, @p4
9> go
(1 row affected)
name
------------------------------
bug_track
dbschema
master
model
sybsystemprocs
tempdb

(6 rows affected, return status = 0)

Obviously, when the parameters are concatenated, they must form a legal T-SQL
statement. If we remove one of the spaces from the above statement, then we
see:

1> declare @p1 varchar(255),@p2 varchar(255),@p3 varchar(255), @p4 varchar(255)
2>
3> select @p1 = "select",
4> @p2 = "name ",
5> @p3 = "from ",
6> @p4 = "sysdatabases"
7>
8> exec sp_remotesql "SELF", @p1, @p2, @p3, @p4
9> go
Msg 156, Level 15, State 1
, Line 1
Incorrect syntax near the keyword 'from'.
(1 row affected, return status = 156)

Back to top

-------------------------------------------------------------------------------


9.1.10: SQL to determine space used for an index

-------------------------------------------------------------------------------

This one is not strictly a stored proc, but it has its uses.

Fundamentally, it is sp_spaceused reduced to bare essentials:

set nocount on
declare @objname varchar(30)
select @objname = "your table"

select index_name = i.name,
i.segment,
rowtotal = rowcnt(i.doampg),
reserved = reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg),
data = data_pgs(i.id, i.doampg),
index_size = data_pgs(i.id, i.ioampg),
unused = (reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg) -
(data_pgs(i.id, i.doampg) +
data_pgs(i.id, i.ioampg)))
into #space
from sysindexes i
where i.id = object_id(@objname)

You can analyse this in a number of ways:

1. This query should tally with sp_spaceused @objname:
select 'reserved KB' = sum(reserved) * 2,
'Data KB' = sum(data) * 2,
'Index KB' = sum(index_size) * 2,
'Unused KB' = sum(unused) * 2
from #space
2. This one reports space allocation by segment:
select 'segment name' = s.name,
'reserved KB' = sum(reserved) * 2,
'Data KB' = sum(data) * 2,
'Index KB' = sum(index_size) * 2,
'Unused KB' = sum(unused) * 2
from #space t,
syssegments s
where t.segment = s.segment
group by s.name
3. This one reports allocations by index:
select t.index_name,
s.name,
'reserved KB' = reserved * 2,
'Data KB' = data * 2,
'Index KB' = index_size * 2,
'Unused KB' = unused * 2
from #space t,
syssegments s
where t.segment = s.segment

If you leave out the where clause in the initial select into, you can analyse
across the whole database.

Hope this points you in the right direction.

Back to top

-------------------------------------------------------------------------------

9.1.11: sp_helpoptions - Shows what options are set for a database.

-------------------------------------------------------------------------------

Thanks again go to Bret Halford for some more sterling work. The following proc
will let you know some of the options that are set within a database. The
release included here has been tested on Solaris (11.9.2 and 12.0), but it
is likely that other platforms use and set @@options differently (endian issues
etc.). As such, it is more of a template for platforms other than
Solaris. Please feel free to expand it and send the modified proc back to me
and Bret.

Get it as part of the bundle (zip or tarball) or individually from here.

The output is as follows:

1> sp_helpoptions
2> go
showplan is off
ansinull is off
ansi_permissions is off
arithabort is on
arithignore is off
arithignore arith_overflow off
close on endtran is off
nocount is on
noexec is off
parseonly is off.
(return status = 0)

Back to top

-------------------------------------------------------------------------------

9.1.12: sp_days - returns days in a given month.

-------------------------------------------------------------------------------

Returns the number of days in a month. Modify it to fit your needs: it can
either return a result set (of one row), set a variable, or both, as this
version does.

Get it as part of the bundle (zip or tarball) or individually from here.

The output is as follows:

1> declare @days int
2> -- For November 1999
3> exec sp_days @days,11,99
4> go

---
30

(1 row affected)
(return status = 0)

Back to top

-------------------------------------------------------------------------------

9.1.13: sp__optdiag - optdiag from within isql.

-------------------------------------------------------------------------------

Versions of ASE: minimum of 11.5. I cannot test it on 11.5, so I do not know
whether it works on that version. However, the procedure uses a 'case'
statement, so it will certainly not work before 11.5. If anyone still has 11.5
running and can let me know that it works, I would be grateful.

There seems little point in showing you what optdiag output looks like, since
it takes a fair amount of space; this proc produces pretty much identical
output.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.1.14: sp_desc - a simple list of a table's columns

-------------------------------------------------------------------------------

Stored proc to return a much simpler picture of a table than sp_help. sp_help
produces all of the information, too much in fact, and it always takes me a
couple of minutes to work out the various flags etc. I think that this is a
little easier to read and understand quickly.

1> sp_desc spt_values
2> go
spt_values
No. Column Name Datatype
----- ------------------------------ -------------------- --------
(1) name varchar(28)
(2) number int NOT NULL
(3) type char(2) NOT NULL
(4) low int
(5) high int
(6) msgnum int

(6 rows affected)
(return status = 0)
1>

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.1.15: sp_lockconfig - displays locking schemes for tables

-------------------------------------------------------------------------------
sp_lockconfig [sys_flag]
will list the server default locking scheme and lock promotion data (HWM, LWM,
and PCT) in priority order:

1. All table-specific lock configurations for the current database.
2. The database-wide lock configurations, if they exist.
3. The server-wide lock configurations.

All tables will then be listed, grouped by locking scheme. For data-only
tables, a suffix of "*" indicates that the table was originally created with a
clustered allpages configuration and was then altered to a data-only
configuration. (The reverse cannot be detected.) If sys_flag is non-null then
system tables will be included. Note that many system tables do not have a
defined locking scheme. (The implicit scheme is allpages.)

1> sp_lockconfig
2> go

TYPE OBJECT LEVEL LOCK DATA
-------- ---------------------------- ----- ---------------------------------
Server - page PCT = 100, LWM = 200, HWM = 200
Server - row PCT = 100, LWM = 200, HWM = 200
Server default lock scheme - allpages

THERE ARE 4 USER TABLES WITH ALLPAGES LOCKING.

TABLE OWNER
------------------------------ ------------------------------
appkey8 dbo
appkey8_hist dbo
text_table7 TESTUSER2
with_types_table12 TESTUSER3

THERE ARE 2 USER TABLES WITH DATAPAGES LOCKING.

TABLE OWNER
------------------------------ ------------------------------
dol_test1 dbo
dol_test2 dbo

THERE ARE 2 USER TABLES WITH DATAROWS LOCKING.

TABLE OWNER
------------------------------ ------------------------------
dol_test10 dbo
dol_test11 dbo

(return status = 0)
1>

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.2.1: Generating dump/load database command.

-------------------------------------------------------------------------------

This shell script generates dump/load database commands from dump devices. I
cannot show the output because the script seems to be broken; it is certainly a
little convoluted and is really pertinent to the pre-11 days, possibly even
pre-10. It is available as part of the archive code package.

What is really needed here is some automatic backup scripts. How is it going
Barb?

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.2.2: upd_stats.csh

-------------------------------------------------------------------------------

This is a script from Frank Lundy (mailto:flu...@verio.net) that does not
generate output but performs the updates directly, so there is no output to
show you. It requires a program called sqlsa, which you will need to modify to
suit your own server. You probably want to make the file unreadable by regular
users, who have no need for any passwords contained within.

Get it as part of the bundle (zip or tarball) or individually from here.

Back to top

-------------------------------------------------------------------------------

9.3.1: SybPerl FAQ

Sybperl is a fantastic utility for DBAs and system administrators needing to
put together scripts to monitor and manage their installations, as well as
being the main way that web developers can gain access to data held in ASEs.

Sybperl now comes in a number of flavours, including a DBD version that is part
of the DBI/DBD suite. Michael has also written a package called Sybase::Simple
that sits on top of Sybperl and makes building such scripts a breeze.

Find out more and grab a copy from Michael Peppler's (mpep...@peppler.org) own
FAQ:

http://www.mbay.net/~mpeppler/Sybperl/sybperl-faq.html

Back to top

-------------------------------------------------------------------------------

9.3.2: dbschema.pl

-------------------------------------------------------------------------------

dbschema.pl is a script that will extract the schema (everything from the
server definition down to table permissions etc.) from an ASE/SQL Server. It
was initially developed by Michael Peppler but is currently maintained by me
(David Owen, do...@midsomer.org). The script is written using Sybperl and was
originally distributed solely as part of that package. The latest copy can be
got from ftp://ftp.midsomer.org/pub/dbschema.tgz.

Back to top

-------------------------------------------------------------------------------

9.3.3: ddl_insert.pl

-------------------------------------------------------------------------------

In order to use this script you must have Sybperl installed -- see Q9.3.1 for
more information.

This utility produces the insert statements to rebuild a table. Note that it
depends on the environment variable DSQUERY for the server selection. Also be
warned that the generated script truncates the destination table, which might
not be what you want. Other than that, it looks like an excellent addition to
the testing toolkit.

Get it as part of the bundle (zip or tarball) or individually from here.

[dowen@n-utsire code]$ ./ddl_insert.pl alrbprod sa myPassword h%
-- This script is created by ./ddl_insert.pl.
-- It would generate INSERT statements for tables whose names match the
-- following pattern:
/* ( 1 = 0
or name like 'h%'
)

*/

set nocount on
go


/*.............. hearing ...............*/
-- Sat Feb 17 13:24:09 MST 2001

declare @d datetime
select @d = getdate()
print ' %1! hearing', @d
go

truncate table hearing -- Lookout !!!!!!
go

insert hearing values('Dec 11 1985 12:00:00:000AM', 1, '1030', 2, '0930', 'Calgary, Alberta', NULL, NULL, '3408 Board Room', 3, NULL, '35', NULL)
...

Back to top

-------------------------------------------------------------------------------

9.3.4: int.pl

-------------------------------------------------------------------------------

Background

Please find included a copy of int.pl, the interfaces file conversion tool. It
should work with perl 4 and 5, but some perl distributions don't seem to
support gethostbyname, which you need for the solaris, ncr and vms file
formats.

You may need to adjust the first line to the path of perl on your system, and
may need to set the PERLLIB environment variable so that it finds the
getopts.pl module.

While it may not be 100% complete (e.g. it ignores the timeout field), you're
free to add any functionality you may need at your site.

int.pl -h will print the usage; a typical invocation is
int.pl -f sun4-interfaces -o sol > interfaces.sol
Usage: int.pl -f <file>
-o { sol|ncr|vms|nw386|os2|nt386|win3|dos|ntdoswin3 }
[-V] [-v] [-h]
where
-f input file to process
-o specify output mode
(e.g. sol, ncr, vms, nw386, os2, nt386, win3, dos, ntdoswin3)
-V turn on verbose mode
-v print version string
-h print this message

[The following are a couple of output examples; is any other utility ever
needed? Ed]

Get it as part of the bundle (zip or tarball) or individually from here.

The following interface file:

N_UTSIRE
master tcp ether n-utsire 4100
query tcp ether n-utsire 4100

N_UTSIRE_XP
master tcp ether n-utsire 4400
query tcp ether n-utsire 4400

N_UTSIRE_BS
master tcp ether n-utsire 4010
query tcp ether n-utsire 4010


becomes

[dowen@n-utsire code]$ ./int.pl -f $SYBASE/interfaces -o vms
N_UTSIRE
master tcp ether 192.168.1.1 4100
query tcp ether 192.168.1.1 4100
N_UTSIRE_XP
master tcp ether 192.168.1.1 4400
query tcp ether 192.168.1.1 4400
N_UTSIRE_BS
master tcp ether 192.168.1.1 4010
query tcp ether 192.168.1.1 4010
[dowen@n-utsire code]$
[dowen@n-utsire code]$ ./int.pl -f $SYBASE/interfaces -o sol
N_UTSIRE
master tli tcp /dev/tcp \x00021004c0a801010000000000000000
query tli tcp /dev/tcp \x00021004c0a801010000000000000000
N_UTSIRE_XP
master tli tcp /dev/tcp \x00021130c0a801010000000000000000
query tli tcp /dev/tcp \x00021130c0a801010000000000000000
N_UTSIRE_BS
master tli tcp /dev/tcp \x00020faac0a801010000000000000000
query tli tcp /dev/tcp \x00020faac0a801010000000000000000
[dowen@n-utsire code]$
[dowen@n-utsire code]$ ./int.pl -f $SYBASE/interfaces -o ncr
N_UTSIRE
master tli tcp /dev/tcp \x00021004c0a80101
query tli tcp /dev/tcp \x00021004c0a80101
N_UTSIRE_XP
master tli tcp /dev/tcp \x00021130c0a80101
query tli tcp /dev/tcp \x00021130c0a80101
N_UTSIRE_BS
master tli tcp /dev/tcp \x00020faac0a80101
query tli tcp /dev/tcp \x00020faac0a80101
[dowen@n-utsire code]$

(In the tli output formats, the hex string encodes the address: the first four
hex digits after the \x are the address family (0002 = AF_INET), the next four
are the port in hex (0x1004 = 4100), and the next eight are the IP address,
one octet per pair (c0a80101 = 192.168.1.1).)

Back to top

-------------------------------------------------------------------------------

9.3.5: Sybase::Xfer.pm

-------------------------------------------------------------------------------

The following is taken directly from the author's own documentation.

QUICK DESCRIPTION
Sybase::Xfer transfers data between two Sybase servers with multiple
options like specifying a where_clause and a smart auto_delete option,
and it can pump data from a perl subroutine or take a plain flat file.
Has an option, similar to the default behaviour in Sybase::BCP, to
capture failed rows in a batch.

Also comes with a command line wrapper, sybxfer.

Also comes with a sister module Sybase::ObjectInfo.pm


DEPENDENCIES
Requires Perl Version 5.005 or beyond

Requires packages:
Sybase::DBlib
Getopt::Long
Tie::IxHash


SYNOPSIS
#from perl
#!/usr/bin/perl5.005
use Sybase::Xfer;
$h = new Sybase::Xfer( %options );
$h->xfer();
$h->done();

#from shell
#!/usr/ksh
sybxfer <options>


DESCRIPTION (a little bit from the pod)

If you're in an environment with multiple servers and you don't want
to use cross-server joins then this module may be worth a gander. It
transfers data from one server to another server row-by-row in memory
w/o using an intermediate file.

To juice things up it can take data from any set of sql commands as
long as the output of the sql matches the definition of the target
table. And it can take data from a perl subroutine if you're into
that.

It also has some smarts to delete rows in the target table before the
data is transferred by several methods. See the -truncate_flag,
-delete_flag and -auto_delete switches.

Everything is controlled by switch settings sent as a hash to the
module. In essence one describes the 'from' source and the 'to' source
and the module takes it from there.

Error handling:

An attempt was made to build in hooks for robust error reporting via
perl callbacks. By default, it will print to stderr the data, the
column names, and their datatypes upon error. This is especially
useful when Sybase reports an 'attempt to load an oversized row'
warning message.


Auto delete:

More recently the code has been tweaked to handle the condition
where data is bcp'ed into a table but the row already exists and the
desired result is to replace the row. Originally, the -delete_flag
option was meant for this condition, i.e. clean out the table via the
-where_clause before the bcp in occurs. If this action is too
drastic, however, by using the -auto_delete option one can be
more precise and force only those rows about to be inserted to be
deleted before the bcp in begins. It will bcp the 'key' information
to a temp table, run a delete (in a loop so as not to blow any log
space) via a join between the temp table and the target table, and
then begin the bcp in. It's weird, but in the right situation it may
be exactly what you want. Typically used to manually replicate a table.
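
As a concrete illustration, a transfer driven from Perl might look like the
sketch below. Only -where_clause and -auto_delete are described above; the
other switch names here are assumptions for illustration only, so check the
module's pod for the real list:

#!/usr/bin/perl
use Sybase::Xfer;

# Sketch only: switch names other than -where_clause and -auto_delete
# are hypothetical -- consult the Sybase::Xfer pod for the actual names.
my %options = (
    -from_server   => 'PROD',                # hypothetical source server
    -from_user     => 'repl',
    -from_password => 'secret',
    -from_table    => 'sales..orders',
    -to_server     => 'REPORT',              # hypothetical target server
    -to_table      => 'sales..orders',
    -where_clause  => "trade_date = 'Mar 1 2003'",
    -auto_delete   => 'order_id',            # hypothetical key column
);

my $h = new Sybase::Xfer(%options);
$h->xfer();    # move the rows
$h->done();    # close both connections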


CONTACTS
my e-mail: stephen...@msdw.com

Back to top

-------------------------------------------------------------------------------

9.3.6: Sybmon.pl

-------------------------------------------------------------------------------

Sybmon is a utility for interactive, realtime monitoring of processes and
locks. It is a sort of "top" for Sybase. It requires both Sybperl and Perl/Tk
to be installed; both are available for most platforms, including Linux, NT
and Solaris.

Grab the tarball from ftp://ftp.midsomer.org/pub/sybmon.tar.gz or a zip'd one
from ftp://ftp.midsomer.org/pub/sybmon.zip.

There is also an NT binary for those people who are unable to install Perl,
or just don't want to. You can get that from
ftp://ftp.midsomer.org/pub/sybmon-i386.zip.

You can view a screenshot of the main process monitor from here (just to prove
that it runs fine on NT!). A not very exciting server, doing not a lot!

A note of interest: to get the screenshot I used the latest copy of ActiveState
Perl for NT and their Perl Package Manager (just type PPM at a DOS prompt once
Perl is installed) and had the 3 required packages (Tk, Sybperl,
Sybase::Login) installed in under 2 minutes!

Back to top

-------------------------------------------------------------------------------

9.3.7: showserver.pl

-------------------------------------------------------------------------------

This small Perl script shows a list of the servers running on the current
machine. It does a similar job to the showserver that comes with ASE, but
looks much nicer.

Get it as part of the bundle (zip or tarball) or individually from here.

bash-2.03$ ./showserver.pl

monserver's
-----------
CONCRETE    Owner: sybase, Started: 14:25:51
    Engine: 0 (PID: 520)
PORTLAND    Owner: sybase, Started: 14:29:33
    Engine: 0 (PID: 545)

dataserver's
------------
CONCRETE    Owner: sybase, Started: 14:10:38
    Engine: 1 (PID: 494)
    Engine: 0 (PID: 493)
PORTLAND    Owner: sybase, Started: 14:26:56
    Engine: 0 (PID: 529)

backupserver's
--------------
CONCRETE_back    Owner: sybase, Started: 14:25:25
    Engine: 0 (PID: 515)
PORTLAND_back    Owner: sybase, Started: 14:29:07
    Engine: 0 (PID: 538)

Back to top

-------------------------------------------------------------------------------

9.3.8: Collection of Perl Scripts

-------------------------------------------------------------------------------

David Whitmarsh has put together a collection of scripts to help manage and
monitor ASEs. They can be grabbed individually or en masse from
http://sparkle-consultancy.co.uk/sybase/.

Back to top

-------------------------------------------------------------------------------

9.4.1: Sybtcl FAQ

This is Tom Poindexter's (http://www.nyx.net/~tpoindex/) FAQ.

-------------------------------------------------------------------------------

Index of Sections

* Overview
* The enabling language platform
* Design and commands
* Applications
* Information Sources
* Download
* About the Author

-------------------------------------------------------------------------------

Overview

Sybtcl is an extension to Tcl (Tool Command Language) that allows Tcl programs
to access Sybase databases. Sybtcl adds additional Tcl commands to login to a
Sybase server, send SQL statements, retrieve result sets, execute stored
procedures, etc. Sybtcl simplifies Sybase programming by creating a high level
interface on top of DB-Library. Sybtcl can be used to program a wide variety of
applications, from system administration procedures to end-user applications.

Sybtcl runs on Unix, Windows NT and 95, and Macintosh platforms.

-------------------------------------------------------------------------------

The enabling language platform

Tool Command Language, often abbreviated "Tcl" and pronounced as "tickle", was
created by Dr. John Ousterhout at the University of California-Berkeley. Tcl is
an interpreted script language, similar to Unix shell, Awk, Perl, and others.
Tcl was designed to be easily extended, where new commands are added to the
base interpreter to provide additional functionality. Core Tcl commands contain
all of the usual constructs provided by most programming languages: setting and
accessing variables, file read/write, if-then-else, do-while, function calls.
Tcl also contains many productivity enhancing commands: list manipulation,
associative arrays, and regular expression processing.

Tcl has several features that make it a highly productive language. First, the
language is interpreted. Interpreters allow execution without a compile and
link step. Code can be developed with immediate feedback. Second, Tcl has a
single data type: string. While this might at first glance seem to a
deficiency, it avoids problems of data conversion and memory management. (This
feature doesn't preclude Tcl from performing arithmetic operations.) Last, Tcl
has a consistent and simple syntax, much the same as the Unix shell. Every Tcl
statement is a command name, followed by arguments.

Dr. Ousterhout also developed a companion Tcl extension, called Tk. Tk provides
simplified programming of X11 applications with a Motif look and feel. X11
applications can be programmed with 60%-80% less code than equivalent Xt,
Motif, or Xview programs using C or C++.

Dr. Ousterhout now leads Tcl/Tk development at Sun Microsystems.

-------------------------------------------------------------------------------

Design and commands

Sybtcl was designed to fill the gap between pure applications development tools
(e.g. APT, PowerBuilder, et al.) and database administration tools, often Unix
shell scripts consisting of 'isql' and Awk pipelines. Sybtcl extends the Tcl
language with specialized commands for Sybase access. Sybtcl consists of a set
of C language functions that interface DB-Library calls to the Tcl language.

Instead of a simple one-to-one interface to DB-Library, Sybtcl provides a
high-level Sybase programming interface of its own. The following example is a
complete Sybtcl program that illustrates the simplified interface. It relies on
the Tcl interpreter, "tclsh", that has been extended with Sybtcl.

#!/usr/local/bin/tclsh
set hand [sybconnect "mysybid" "mysybpasswd"]
sybuse $hand pubs2
sybsql $hand "select au_lname, au_fname from authors order by au_lname"
sybnext $hand {
    puts [format "%s, %s" @1 @2]
}
sybclose $hand
exit

In this example, a Sybase server connection is established ("sybconnect"), and
the "pubs2" sample database is accessed ("sybuse"). An SQL statement is sent to
the server ("sybsql"), and all rows returned are fetched and printed
("sybnext"). Finally, the connection is closed ("sybclose").

The same program can be made to display its output in an X11 window with a few
changes, using the Tcl/Tk windowing shell, "wish", also extended with Sybtcl.

#!/usr/local/bin/wish
listbox .sql_output
button .exit -text exit -command exit
pack .sql_output .exit
set hand [sybconnect "mysybid" "mysybpasswd"]
sybuse $hand pubs2
sybsql $hand "select au_lname, au_fname from authors order by au_lname"
sybnext $hand {
    .sql_output insert end [format "%s, %s" @1 @2]
}
sybclose $hand

In addition to these commands, Sybtcl includes commands to access return column
names and datatypes ("sybcols"), return values from stored procedures
("sybretval"), reading and writing of "text" or "image" columns ("sybreadtext",
"sybwritetext"), canceling pending results ("sybcancel"), and polling
asynchronous SQL execution ("sybpoll").

Full access to Sybase server messages is also provided. Sybtcl maintains a Tcl
array variable which contains server messages, output from stored procedures
("print"), and DB-Library and OS error messages.

-------------------------------------------------------------------------------

Applications

The Sybtcl distribution includes "Wisqlite", an X11 SQL command processor.
Wisqlite provides a typical windowing style environment to enter and edit SQL
statements, list results of the SQL execution in a scrollable listbox, save or
print output. In addition, menu access to the Sybase data dictionary is
provided, listing tables in a database, the column names and datatypes of a
table, text of stored procedures and triggers.

For a snapshot of Wisqlite in action, look here.

Other applications included in the Sybtcl distribution include:

* a simple graphical performance monitor
* a version of "sp_who", with periodic refresh

Sybtcl users have reported a wide variety of applications written in Sybtcl,
ranging from end user applications to database administration utilities.

-------------------------------------------------------------------------------

Information Sources

Sybtcl is extensively documented in "Tcl/Tk Tools", edited by Mark Harrison,
published by O'Reilly and Associates, 1997, ISBN: 1-56592-218-2.

Tcl/Tk is described in detail in "Tcl and the Tk Toolkit" by Dr. John
Ousterhout, Addison-Wesley Publishing 1994, ISBN: 0-201-63337-X. Another recent
publication is "Practical Programming in Tcl and Tk" by Brent Welch, Prentice
Hall 1995, ISBN: 0-13-182007-9.

A wealth of information on Tcl/Tk is available via Internet sources:

news:comp.lang.tcl
http://www.neosoft.com/tcl/
http://www.sco.com/Technology/tcl/Tcl.html
ftp://ftp.neosoft.com/pub/tcl/

-------------------------------------------------------------------------------

Download

Download Sybtcl in tar.gz format for Unix.
Download Sybtcl in zip format for Windows NT and 95.

Tcl/Tk and Sybtcl are both released in source code form under a "BSD" style
license. Tcl/Tk and Sybtcl may be freely used for any purpose, as long as
copyright credit is given to the respective owners. Tcl/Tk can be obtained from
either anonymous FTP site listed above.

Tcl/Tk and Sybtcl can be easily configured under most modern Unix systems
including SunOS, Solaris, HP-UX, Irix, OSF/1, AIX, SCO, et al. Sybtcl also runs
under Windows NT and 95; pre-compiled DLLs are included in the distribution.
Sybtcl requires Sybase's DB-Library, from Sybase's Open Client bundle.

Current versions are:

* Sybtcl 2.5: released January 8, 1998
* Tcl 8.0: released August 13, 1997
* Tk 8.0: released August 13, 1997

The Internet newsgroup comp.lang.tcl is the focal point for support. The group
is regularly read by developers and users alike. Authors may also be reached
via email. Sun has committed to keeping Tcl/Tk as freely available software.

-------------------------------------------------------------------------------

About the Author

Tom Poindexter is a consultant with expertise in Unix, relational databases,
systems and application programming. He holds a B.S. degree from the University
of Missouri, and an M.B.A. degree from Illinois State University. He can be
reached at tpoi...@nyx.net.

Back to top

-------------------------------------------------------------------------------

9.4.2: sybdump

-------------------------------------------------------------------------------

Sybdump is a Tcl script written by De Clarke (d...@ucolick.org) for extracting a
database schema. Look in

ftp://ftp.ucolick.org/pub/src/UCODB

for sybdump.tar or sybdump.tar.gz.

Back to top

-------------------------------------------------------------------------------

9.4.3: wisql

-------------------------------------------------------------------------------

Another Sybtcl package maintained by De Clarke (d...@ucolick.org), this one is
a graphical replacement for isql. Correct me if I am wrong, but I think that
this started life as wisqlite, was included as part of the Sybtcl package, and
was then updated by De to become wisql.

You can grab a copy of wisql from
ftp://ftp.ucolick.org/pub/UCODB/wisql5B.tar.gz

Back to top

-------------------------------------------------------------------------------

9.5.1: Sybase Module for Python

-------------------------------------------------------------------------------

Dave Cole has a module for Python that allows connectivity to Sybase in an
analogous way to Sybperl or Sybtcl. You can find details at
http://www.object-craft.com.au/projects/sybase/.

Back to top

-------------------------------------------------------------------------------

9.6.1: SQSH, SQshelL

SQSH is a direct replacement for isql with a million more bells and whistles.
In fact, the title gives it away, since SQSH is a pretty seamless marriage of
sh(1) and isql.

There has been a webified copy of the SQSH FAQ, based on the 1.4 release,
contained within these pages for a while, but it is considerably behind the
times. As such, I have moved the 1.4 release to a separate file readable from
here.

The current SQSH FAQ can be seen on Scott's own site, http://www.voicenet.com/
~gray/FAQ.html.

-------------------------------------------------------------------------------

Back to top

-------------------------------------------------------------------------------

9.6.2: NTQuery.exe

-------------------------------------------------------------------------------

Brief

ntquery.exe is a 32-bit application providing a lightweight but robust Sybase
access environment for Win95/NT. It has a split window: the top for queries,
the bottom for results and error/message handler responses, which are processed
in-line. Think of it as isql for Windows - a better (more reliable) version of
wisql (with sensible error handling). Because it's simple, it can be used
against rep-server (I've also used it against Navigation Server (R.I.P.)).

Requirements: Open Client/DB-Library (tested with 10.x up to 11.1.1)

It picks up the server list from %SYBASE%\ini\sql.ini, and you can add
DSQUERY, SYBUSER and SYBPASS variables to your user variables to set default
server, username and password values.

Instructions

To connect: SQL->Connect (only one connection at a time, but you can run
multiple ntquery copies). Enter a query in the top window and hit F3 (or
SQL->Execute Query if you must use the mouse). Results, messages and errors
appear in the bottom window.

A script can be loaded into the top window via File->Open. Either SQL or
results can be saved with File->Save - it depends on which window your focus
is on.

There's a buffer limit of 2MB.

Get it here

ntquery.zip [22K]

Back to top

-------------------------------------------------------------------------------

9.6.3: BCPTool - A utility for Transferring Data from one ASE to Another.

-------------------------------------------------------------------------------

BCPTool is a GUI utility written by Anthony Mandic that moves data from one ASE
to another. It runs on Solaris and Linux and is very straightforward to use.

Go to http://www.mbay.net/~mpeppler/bcptool to grab a copy, read the
documentation and see a couple of screen shots.

Hot news! Michael Peppler is porting BCPTool to use the GTK+ libraries, which
is basically the standard GNOME toolkit for Linux. Go to Michael's site for
more details (http://www.mbay.net/~mpeppler).

Back to top

-------------------------------------------------------------------------------

9.7.1: How to access a SQL Server using Linux

-------------------------------------------------------------------------------

I am planning to remove/reduce/rewrite this section when the ASE on Linux FAQ
moves to section 2. Most of it is out of date, and I think that most of its
links are broken.

Some time back, Sybase released a binary distribution of ctlib for Linux. This
is just the header and libraries files for ctlib only, not dblib, not isql, not
bcp, not the dataserver and not the OpenServer. This was done as a skunk works
internal project at Sybase, for the good of the Linux community, and not
supported by Sybase in any official capacity. This version of ctlib identifies
itself as 10.0.3.

At the time, the binary format for Linux libraries was a format called a.out.
Since then, the format has changed to the newer ELF format. ELF libraries and
.o files cannot be linked with a.out libraries and .o files. Fortunately, a.out
libraries and .o files can easily be converted to ELF via the objcopy(1)
program.

Getting a usable ctlib for Linux isn't that easy, though. Another
compatibility problem has arisen since these old libraries were compiled. The
byte-order for the ctype macros has changed. One can link to the
(converted-to-ELF) ctlib, but running the resulting executable will result in
an error message having to do with missing localization files. The problem is
that the ctype macros in the compiled ctlib libraries are accessing a structure
in the shared C library which has changed its byte order.

I've converted the a.out library, as distributed by Sybase to ELF, and added
the old tables directly to the library, so that it won't find the wrong ones in
libc.

Using this library, I can link and run programs on my Linux machines against
Sybase databases (It also can run some programs against Microsoft SQL server,
but that's another FAQ). However, you must be running Linux 2.0 or later, or
else the link phase will core dump.

The library is available for ftp at:

* ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/ctlib-linux-elf.tgz

A compiled version of sybperl 2.0, built with the above library, is at:

* ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/sybperl.tar.gz

Obviously, only the ctlib module is in that sybperl distribution.

In order to use this code, you will need a Sybase dataserver, a Sybase
interfaces file (in the non-TLI format -- see Q9.3.4), a user named sybase in
your /etc/passwd file, whose home directory is the root of the distribution,
and some application code to link to.
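
Once everything is in place, a short sybperl script makes a quick smoke test.
A minimal sketch, assuming the stock Sybase::CTlib interface from the bundled
sybperl 2.0 (the server and login names are placeholders):

#!/usr/bin/perl
use Sybase::CTlib;

# Sketch only: connect through the converted ctlib and print the
# server's version string. Replace the credentials with your own.
my $dbh = Sybase::CTlib->ct_connect('sa', 'password', 'SYBASE')
    or die "connect failed\n";

foreach my $row ($dbh->ct_sql('select @@version')) {
    print "@$row\n";    # each row comes back as an array reference
}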

As far as an isql replacement goes, use sqsh - Q9.6.1.

One of the libraries in the usual Sybase distribution is libtcl.a. This
conflicts with the library on Linux which implements the Tcl scripting
language, so this distribution names that library libsybtcl.a, which might
cause some porting confusion.

The above conflict is addressed by Sybperl - Q9.3.1 - and sqsh - Q9.6.1.

More information

See Q11.4.6 for more information on setting up DBI/DBD:Sybase

Back to top

-------------------------------------------------------------------------------

9.7.2: Sybase on Linux FAQ

-------------------------------------------------------------------------------

I am planning to move this section out of here next release.

Sybase have released two versions of Sybase on Linux, 11.0.3.3 and 11.9.2, and
a third, 12.5, is in beta testing at this moment, slated for GA sometime in the
first half of 2001.

11.9.2

This is officially supported and sanctioned. The supported version can be
purchased from Sybase on similar, if not exactly the same, terms as 11.9.2
on NT, with one small exception: you can download a developer's version for
free! There is an 11.9.2.2 EBF, although I am not 100% sure if the current
developer's release is 11.9.2 or 11.9.2.2. Certainly for a while, you could
only get the EBF if you had a paid-for version.

11.0.3.3

Please remember that Sybase Inc does not provide any official support for
SQL Server on Linux (ie the 11.0.3.3 release). The folks on the 'net
provide the support.

Index

* Minimum Requirements
* How to report a bug
* Bug list

Minimum Requirements

* Linux release: 2.0.36 or 2.1.122 or greater.

How to report a bug

I hope you understand that the Sybase employee who did the port is a very busy
person, so it's best not to send him mail regarding trivial issues. If you have
tried posting to comp.databases.sybase and ase-lin...@isug.com and have
checked the bugs list, send him an e-mail note with the data below. You will
not get an acknowledgement; the e-mail goes directly into the bug tracking
database. True bugs will be fixed in the next release. Any message without the
Subject line shown below will be deleted, unseen, by a filter.

Administrator: I know that the above sounds harsh, but Wim ten Have has
been launched to world-wide exposure. In order for him to continue to
provide Sybase ASE outside of his normal workload, we all have to support
him. Thanks!

With the above out of the way, if you find a bug or an issue please report it
as follows:

To: wten...@sybase.com
Subject: SYBASE ASE LINUX PR
uname: the result of typing 'uname -a' in a shell
$SYBASE/scripts/hw_info.sh: as 'sybase', run this shell script and enclose
    its output
short description: a one to two line description of the problem
repeatable: yes, you can repeat it; no, you cannot
version of dataserver: as the 'sybase' user, 'cd $SYBASE/bin' and type
    './dataserver -v | head -1'
test case: a test case to reproduce the problem

Bug List
+-----------------------------------------------------------------------------+
| | | | | | |
|-------------+--------+--------------+-------------+-------------+-----------|
| Short | Fixed? | Dataserver | Date | Fix Date | Fix Notes |
| Description | | Release | Reported | | |
|-------------+--------+--------------+-------------+-------------+-----------|
| | | SQL Server/ | | | You must |
| | | 11.0.3.3/P/ | | | upgrade |
| Remote | | Linux Intel/ | Pre-release | Pre-release | your OS |
| connections | Yes | Linux 2.0.36 | of SQL | of SQL | to either |
| hang | | i586/1/OPT/ | Server | Server | 2.0.36 or |
| | | Thu Sep 10 | | | 2.1.122 |
| | | 13:42:44 | | | or |
| | | CEST 1998 | | | greater |
+-----------------------------------------------------------------------------+

as of Fri Nov 20 20:16 (08:16:47 PM) MST 1998

Back to top

-------------------------------------------------------------------------------

9.7.3: Linux Shared Memory for ASE (x86 Processors)

-------------------------------------------------------------------------------

2.2.x Series Kernels and Above

To set the maximum shared memory to 128MB, use the following:

# echo 134217728 > /proc/sys/kernel/shmmax

This comes from the following calculation: 128MB = 128 x 1024 x 1024 bytes =
134217728 bytes.
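
The same arithmetic is easy to script. The sketch below (plain Perl, run as
root) assumes ASE allocates its "total memory" (a count of 2K pages) as a
single shared memory segment, so shmmax must be at least pages x 2048 bytes:

#!/usr/bin/perl
# Sketch only: compute the minimum shmmax for a given ASE "total
# memory" setting (in 2K pages) and write it to the 2.2.x kernel knob.
my $pages = 65536;                 # e.g. 128MB = 65536 x 2K pages
my $bytes = $pages * 2048;         # 65536 x 2048 = 134217728
print "need shmmax >= $bytes\n";

open(SHM, ">/proc/sys/kernel/shmmax") or die "open shmmax: $!";
print SHM $bytes;                  # equivalent to the echo above
close(SHM);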

2.0.x and 2.1.x Kernels

To increase the total memory for ASE (SQL Server) beyond 32MB, several kernel
parameters must be changed.

1. Determine Memory/System Requirements
   + a: Total Memory < 128MB - specific instructions
   + b: Total Memory > 128MB - specific instructions
2. Modify linux/include/asm/shmparam.h to set up shared memory
3. Increase the size of the swap
4. Recompile your kernel & restart using the new kernel
5. Verify the changes have taken effect
6. Increase the total memory to the desired size

Comments

-------------------------------------------------------------------------------

1a - Total Memory < 128MB - specific instructions

-------------------------------------------------------------------------------

Requirements:

Linux 2.0.36 or higher

Total memory is currently limited to 128MB. A request has been made to the
Linux kernel developers to enable large swap support, which will allow the
same sizes as 2.2.x kernels.

-------------------------------------------------------------------------------

1b - Total Memory > 128MB - specific instructions

-------------------------------------------------------------------------------

Requirements:

* Linux Kernel 2.2.x or higher (*)
* util-linux package 2.9 or higher (*)
* Swap space at least as large as the SQL Server


(*) both are available from ftp://ftp.us.kernel.org

You need to make the following change in linux/include/asm-i386/page.h:

- #define __PAGE_OFFSET (0xC0000000)
+ #define __PAGE_OFFSET (0x80000000)

This allows accessing up to 2GB of memory. The default is 960MB.

-------------------------------------------------------------------------------

Step 2: Modify linux/include/asm/shmparam.h to set up shared memory

-------------------------------------------------------------------------------


[max seg size]
- #define SHMMAX 0x2000000 /* defaults to 32 MByte */
+ #define SHMMAX 0x7FFFE000 /* 2048mb - 8k */

[max number of segments]
- #define _SHM_ID_BITS 7 /* maximum of 128 segments */
+ #define _SHM_ID_BITS 5 /* maximum of 32 segments */

[number of bits to count how many pages in the shm segment]
- #define _SHM_IDX_BITS 15 /* maximum 32768 pages/segment */
+ #define _SHM_IDX_BITS 19 /* maximum 524288 pages/segment */

Alter _SHM_IDX_BITS only if you want to go beyond the default 128MB, in
which case you also need the swap space available.

_SHM_ID_BITS + _SHM_IDX_BITS must be less than or equal to 24.

The Linux kernel PAGE size for Intel x86 machines is 4k, so the values above
allow 2^19 pages x 4k = 2GB per segment (matching the SHMMAX of 0x7FFFE000,
i.e. 2GB - 8k) across 2^5 = 32 segments.

-------------------------------------------------------------------------------

Step 3: Increase the size of the swap

-------------------------------------------------------------------------------


$ mkswap -c <device> [size]       <- use for pre-2.2 kernels;
                                     limited to 128MB - 8k

$ mkswap -c -v1 <device> [size]   <- limited to 2GB - 8k

$ swapon <device>

* Add the following to your /etc/fstab to enable this swap at boot:

<device> swap swap defaults 0 0

-------------------------------------------------------------------------------

Step 4: Recompile your kernel & restart using the new kernel

-------------------------------------------------------------------------------


Follow the instructions provided with the Linux Kernel

-------------------------------------------------------------------------------

Step 5: Verify the changes have taken effect

-------------------------------------------------------------------------------


$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 32
max seg size (kbytes) = 2097144
max total shared memory (kbytes) = 67108864
min seg size (bytes) = 1

[jfroebe@jfroebe-desktop asm]$

The changes took.

-------------------------------------------------------------------------------

Step 6: Increase the total memory to the desired size

-------------------------------------------------------------------------------


Because of current limitations in the GNU C Library (glibc), ASE is limited
to 893MB. A workaround to increase this to 1400MB has been submitted.

Increase the total memory to the desired size. Remember the above limitation,
as well as the 128MB limitation on Linux kernel 2.0.36. Note that "total
memory" is given in 2K pages, so 500MB = 500 x 1024 / 2 = 256000 pages.

For example, to increase the total memory to 500MB:

1> sp_configure "total memory", 256000
2> go
1> shutdown
2> go

-------------------------------------------------------------------------------

Comments

* Note that it is possible to set the total memory far above the physical
RAM, which will cause the server to swap.

Back to top

-------------------------------------------------------------------------------

9.7.4: Sybase now available on Free BSD

-------------------------------------------------------------------------------

Amazing, the Sybase folks have got ASE running on FreeBSD! The following post
is from Reinoud van Leeuwen (rein...@n.leeuwen.net). His web site,
http://www.xs4all.nl/~reinoud, contains lots of other useful stuff.

Sybase has made an update of their free 11.0.3.3 SQL server available. This
updated version includes some bug fixes and *FreeBSD support*.

The 11.0.3.3 version is unsupported, but Free for development *and production*!

The server still runs under the Linux emulation, but there is a native SDK
(libraries).

Download it from:

http://www.sybase.com/linux/ase/

Some extra info is at:

http://my.sybase.com/detail?id=1009270

Here are the notes I made to get everything working (still working on things
like sybperl, dbd::sybase and PHP :-)

notes on getting Sybase to work on FreeBSD 4.0 RELEASE
======================================================

(log in as root)

1: create a user sybase, with /usr/local/sybase as its home directory. I gave
it bash as its shell and put it in the group sybase.

2: put the following files in /usr/local (they unpack into a sybase
subdirectory):

* sybase-ase-11.0.3.3-FreeBSD-6.i386.tgz
* sybase-doc-11.0.3.3-FreeBSD-6.i386.tgz
* sybase-ocsd-10.0.4-FreeBSD-6.i386.tgz

3: untar them:

tar xvzf sybase-ase-11.0.3.3-FreeBSD-6.i386.tgz
tar xvzf sybase-doc-11.0.3.3-FreeBSD-6.i386.tgz
tar xvzf sybase-ocsd-10.0.4-FreeBSD-6.i386.tgz
rm sybase*.tgz

4: change the ownership of the tree to sybase:

chown -R sybase:sybase /usr/local/sybase

5: install the FreeBSD linux emulation:

* add the following line to /etc/rc.conf
linux_enable="YES"
* build the following ports:
/usr/ports/emulators/linux_base

(TIP: move the nluug site up in the makefile, this speeds up things
considerably from the Netherlands!)

6: build a kernel that supports System V shared memory; make sure that the
following lines are in the kernel config file (/sys/i386/conf/YOUR_KERNEL):

# the next 3 are now standard in the kernel
options SYSVSHM
options SYSVMSG
options SYSVSEM

options SHMMAXPGS="8192"
options SHMMAX="(SHMMAXPGS*PAGE_SIZE+1)"

(This might be a good time to also enable multiprocessor support in your
kernel.) It is also possible to set the last two entries at runtime:

sysctl -w kern.ipc.shmmax=32000000
sysctl -w kern.ipc.shmall=8192

(log in as sybase or su to it; make sure that the SYBASE environment variable
is set to /usr/local/sybase ; the .cshrc file should set it.)

7: brand some executables to make sure FreeBSD knows that they are Linux ones

brandelf -t Linux /usr/local/sybase/install/sybinit
brandelf -t Linux /usr/local/sybase/install/startserver
brandelf -t Linux /usr/local/sybase/bin/*

8: run ./install/sybinit

With this program you should be able to install a sybase server and a backup
server. (see the included docs or the online manuals on http://
sybooks.sybase.com)

9: To make Sybase start during system boot copy the following script to /usr/
local/etc/rc.d and make it executable by root

#!/bin/sh
# start all sybase servers on this system
# assume that sybase is installed in the home dir of user
# sybase
export SYBASE=`grep -e "^sybase" /etc/passwd | cut -d: -f 6`
export PATH="${SYBASE}/bin:${SYBASE}/install:${PATH}"

unset LANG
unset LC_ALL

cd ${SYBASE}/install

for RUN_SERVER in RUN_*
do
su sybase -c "startserver -f ${RUN_SERVER}" > /dev/null 2>&1
echo -n "${RUN_SERVER} "
done
echo

# end of script

Getting 2 CPUs working
======================

Getting Sybase running on 2 CPUs involves two steps:

* getting FreeBSD working on 2 CPUs, and
* configuring Sybase to use them.

1: Getting FreeBSD to work on 2 CPUs.

Build a new kernel that supports 2 CPUs. Run the command mptable (as root) and
note the last few lines of output; they will tell you what you should include
in your kernel config file.

Edit the kernel config file and build it. Note the messages during the next
reboot. It should say somewhere that it uses the second CPU now.

2: insert the following line in the sybase.sh startup script in
/usr/local/etc/rc.d:

export SRV_CPUCOUNT=2

Also insert this line in the files where environment variables are set for the
user sybase. Edit the config file for the Sybase server(s) on your system
(/usr/local/sybase/<SERVERNAME>.cfg) and change the value of "max online
engines" from DEFAULT to 2. (Another option is to issue the SQL command
sp_configure "max online engines", 2.) During the next Sybase reboot, the last
line in the errorlog should say something like:

engine 1, os pid xxx online

and there should be two processes with the name dataserver now.


Back to top
-------------------------------------------------------------------------------

9.8.1: Other Extended Stored Procedures

-------------------------------------------------------------------------------

The following stored procedures were written by Ed Barlow (sql...@tiac.net)
and can be fetched from the following site:

http://www.edbarlow.com

Here's a pseudo-man page of what you get:

Modified Sybase Procedures
+-------------------------------------------------------------+
| | |
|---------------+---------------------------------------------|
| Command | Description |
|---------------+---------------------------------------------|
|sp__help |Better sp_help |
|---------------+---------------------------------------------|
|sp__helpdb |Database Information |
|---------------+---------------------------------------------|
|sp__helpdevice |Break down database devices into a nice |
| |report |
|---------------+---------------------------------------------|
|sp__helpgroup |List groups in database by access level |
|---------------+---------------------------------------------|
|sp__helpindex |Shows indexes by table |
|---------------+---------------------------------------------|
|sp__helpsegment|Segment Information |
|---------------+---------------------------------------------|
|sp__helpuser |Lists users in current database by group |
| |(include aliases) |
|---------------+---------------------------------------------|
|sp__lock |Lock information |
|---------------+---------------------------------------------|
|sp__who |sp_who that fits on a page |
+-------------------------------------------------------------+
Audit Procedures
+-------------------------------------------------------------+
| | |
|-----------------+-------------------------------------------|
| Command | Description |
|-----------------+-------------------------------------------|
|sp__auditsecurity|Security Audit On Server |
|-----------------+-------------------------------------------|
|sp__auditdb |Audit Current Database For Potential |
| |Problems |
+-------------------------------------------------------------+
System Administrator Procedures
+-------------------------------------------------------------+
| | |
|--------------+----------------------------------------------|
| Command | Description |
|--------------+----------------------------------------------|
|sp__block |Blocking processes. |
|--------------+----------------------------------------------|
|sp__dbspace |Summary of current database space information.|
|--------------+----------------------------------------------|
|sp__dumpdevice|Listing of Dump devices |
|--------------+----------------------------------------------|
|sp__helpdbdev |Show how Databases use Devices |
|--------------+----------------------------------------------|
|sp__helplogin |Show logins and remote logins to server |
|--------------+----------------------------------------------|
|sp__helpmirror|Shows mirror information, discover broken |
| |mirrors |
|--------------+----------------------------------------------|
|sp__segment |Segment Information |
|--------------+----------------------------------------------|
|sp__server |Server summary report (very useful) |
|--------------+----------------------------------------------|
|sp__vdevno |Who's who in the device world |
+-------------------------------------------------------------+
DBA Procedures
+-------------------------------------------------------------+
| | |
|---------------+---------------------------------------------|
| Command | Description |
|---------------+---------------------------------------------|
|sp__badindex |give information about bad indexes (nulls, |
| |bad statistics...) |
|---------------+---------------------------------------------|
|sp__collist |list all columns in database |
|---------------+---------------------------------------------|
|sp__indexspace |Space used by indexes in database |
|---------------+---------------------------------------------|
|sp__noindex |list of tables without indexes. |
|---------------+---------------------------------------------|
|sp__helpcolumns|show columns for given table |
|---------------+---------------------------------------------|
|sp__helpdefault|list defaults (part of objectlist) |
|---------------+---------------------------------------------|
|sp__helpobject |list objects |
|---------------+---------------------------------------------|
|sp__helpproc |list procs (part of objectlist) |
|---------------+---------------------------------------------|
|sp__helprule |list rules (part of objectlist) |
|---------------+---------------------------------------------|
|sp__helptable |list tables (part of objectlist) |
|---------------+---------------------------------------------|
|sp__helptrigger|list triggers (part of objectlist) |
|---------------+---------------------------------------------|
|sp__helpview |list views (part of objectlist) |
|---------------+---------------------------------------------|
|sp__trigger |Useful synopsis report of current database |
| |trigger schema |
+-------------------------------------------------------------+
Reverse Engineering
+-------------------------------------------------------------+
| | |
|-----------------+-------------------------------------------|
| Command | Description |
|-----------------+-------------------------------------------|
|sp__revalias |get alias script for current db |
|-----------------+-------------------------------------------|
|sp__revdb |get db creation script for server |
|-----------------+-------------------------------------------|
|sp__revdevice |get device creation script |
|-----------------+-------------------------------------------|
|sp__revgroup |get group script for current db |
|-----------------+-------------------------------------------|
|sp__revindex |get indexes script for current db |
|-----------------+-------------------------------------------|
|sp__revlogin |get logins script for server |
|-----------------+-------------------------------------------|
|sp__revmirror |get mirroring script for server |
|-----------------+-------------------------------------------|
|sp__revuser |get user script for current db |
+-------------------------------------------------------------+
Other Procedures
+-------------------------------------------------------------+
| | |
|---------------+---------------------------------------------|
| Command | Description |
|---------------+---------------------------------------------|
|sp__bcp |Create unix script to bcp in/out database |
|---------------+---------------------------------------------|
|sp__date |Who can remember all the date styles? |
|---------------+---------------------------------------------|
|sp__quickstats |Quick dump of server summary information |
+-------------------------------------------------------------+

Back to top

-------------------------------------------------------------------------------

9.8.3: xsybmon

-------------------------------------------------------------------------------

The original site, NCSU, no longer carries these bits. If you feel that it's
useful to have xsybmon and you know where the new bits are, please drop me an
e-mail: do...@midsomer.org

There is an alternative that is included as part of De Clarke's wisql package.
It is called syperf. I do not have any screen shots, but I will work on it. You
can grab a copy of wisql from ftp://ftp.ucolick.org/pub/UCODB/wisql5B.tar.gz

Back to top

-------------------------------------------------------------------------------


Additional Information


-------------------------------------------------------------------------------

Power Sites

* Rob Verschoor's site (http://www.sypron.nl) is packed with useful
  information about Sybase ASE and replication, as well as a couple of
  quick-reference guides on steroids for both. Grab the ASE one from
  http://www.sypron.nl/ase_qref.html and the replication one from
  http://www.sypron.nl/rs_qref.html.
* Ed Barlow keeps about the most complete set of references to freeware/
  public domain/shareware available for Sybase. Check out his site at
  http://www.edbarlow.com.

Useful Documentation

* The unauthorized documentation of DBCC by Al Huntley -
  http://user.icx.net/~huntley/dbccinfo.htm
* More DBCC's by KaleidaTech Associates, Inc. -
  http://www.kaleidatech.com/dbcc1.htm
* Anthony Mandic's Installing Sybase on Solaris -
  http://www.mbay.net/~mpeppler/sos/sos.html
* John Knox has a good paper on the contents of the interfaces file at
  http://www.outlands.demon.co.uk

Sybase Resources

* Pacific Rim Network Systems Inc Sybase Resource Links -
  http://www.alaska.net/~pacrim/sybase.html
* SQL Server and Rep Server on NT -
  http://www.xs4all.nl/~reinoud/ntsqlrep.html
* Todd Boss has a host of useful stuff at
  http://www.bossconsulting.com/sybase_dba/
* I am not sure who this site belongs to, but it contains lots of good stuff:
  http://www.rocket99.com/sybase/index.html

Books, Magazines and Articles

* Sybase Documentation - http://sybooks.sybase.com
* Intro to Sybase Architecture -
  http://www2.dgsys.com/~dcasug/sybintro/intro.html
* SQL Forum - http://www.sqlforum.com (sadly the technical papers that were
  there are gone).
* Connecting Sybase to the Web - http://www.dbmsmag.com/9711i14.html

Freeware/Shareware

* sybinit4ever: Sybase ASE 11.5 ASCII-only server creation tool -
  http://www.sypron.nl/si4evr.html
* Sybase Freeware and Shareware at Ed Barlow's site - http://www.edbarlow.com
* Thierry Antinolfi has a very good site packed full of useful tools and
  information at http://www.dbadevil.com
* DBD::Sybase - http://www.mbay.net/~mpeppler
* DBI/DBD::Sybase on Linux - http://www.whirlycott.com/phil/dbdsybase/
* Sybase Scheme Extensions -
  http://www.cs.indiana.edu/scheme-repository/ext.html
* SQSH (SQL SHell for Unix) by Scott Gray - http://www.sqsh.org
* ISUG's Freeware Collection - http://www.isug.com/ISUG2/links.html
* Sybase to HTML Converter -
  http://www.algonet.se/~bergkarl/lasse/scripts_eng.html
* Tool to access a Sybase server with line editing and history recall -
  http://www.mcs.net/~ivank/sybtool.html
* Sybase connectivity libraries - http://www.sybase.com/products/samples/
* Manish I Shah's Smart Sybase Editor - http://asse.sourceforge.net
* A web to Sybase interface - http://archive.eso.org/wdb/html/
* Al Huntley has some nifty tools as well as the DBCC list -
  http://user.icx.net/~huntley/dbccinfo.htm
* John Knox has a nifty tli2ip and ip2tli converter at
  http://www.outlands.demon.co.uk
* A very useful project to build a free set of Open Client libraries is at
  http://www.freetds.org
* De Clarke has some very useful SybTcl stuff; start looking at
  http://www.ucolick.org/cgi-bin/Tcl/database.cgi. One of the really nice
  apps is Sybase PerfMeter.
* An ODBC based Windows isql-type client can be found at
  http://www.indus-soft.com/winsql/ (there is a free "lite" version and a
  commercial version).
* Imran Hussain has written a number of Sybase utilities; they can be found
  at http://www.imranweb.com/freesoft.
* Brian Ceccarelli's BrainTools can be accessed from
  http://www.talusmusic.com/BrainTools.
* Ginola Pascal's Like Sybase Central can be grabbed from
  http://perso.wanadoo.fr/laserquest/linux.

User Groups

* International Sybase User Group - http://www.isug.com
* Indiana Sybase User's Group http://www.cs.bsu.edu/homepages/sam/isug
* Ontario Sybase User Group (OSUG) Website - http://www.interlog.com/~osug
* DCASUG, DC Area Sybase User Group - http://www2.dgsys.com/~dcasug
* New Zealand Sybase User Group - http://www.nzsug.org.nz/
* Wisconsin Sybase User Group - http://www.reveregroup.com/wisug/
* Tampa Bay Sybase User Group - http://www.soaringeagleltd.com/LUG.htm

Related FAQs

* ASE on Linux FAQ - http://www.mbay.net/~mpeppler/Linux-ASE-FAQ.html
* Sybperl FAQ - http://www.mbay.net/~mpeppler/Sybperl/sybperl-faq.html
* Tuning Sybase System 11 for NetWare on Compaq -
http://www.compaq.com/support/techpubs/whitepapers/140a0896.html
* SQR FAQ/User Group - http://www.sqrug.com/
* EAServer FAQ - http://www.ehandelnorden.se/sybase/faq.html/
* BusinessObjects FAQ - http://www.upenn.edu/computing/da/bo/busob-faq.html

Academia

* Yale Centre for Medical Informatics -
  http://paella.med.yale.edu/topics/database.html
* NC State University - http://www.acs.ncsu.edu:80/Sybase
* Simon Fraser University - http://www.cs.sfu.ca/CourseCentral/Software/Sybase
* University of California - http://www-act.ucsd.edu/webad/sybase.html
* Rutgers - http://paul.rutgers.edu/sybase.html

Commercial Links

The following sites are placed here without any endorsement by the FAQ
maintainer.

* Ed Barlow's site of sites -
  http://www.tiac.net/users/sqltech/links.htm#commercial_links

The mother ship may be reached at http://www.sybase.com


Miscellany

12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase


-------------------------------------------------------------------------------

12.1: Sybase IQ

-------------------------------------------------------------------------------

(This deserves to be a section all on its own, as per ASE and ASA. However, I
know absolutely nothing about it. If anyone would like to help, I would be very
grateful for some more information. My expectations are not high though.)

Sybase IQ isn't meant as just an indexing scheme, per se. It is meant as a
means of providing a low cost data warehousing solution for unplanned queries.

By the way, Sybase IQ does not use bitmapped indexes, it uses bitwise indexes,
which are quite different. [Anyone care to add a paragraph explaining the
difference? Ed.]

In data warehousing, MIS generally does not know what the queries are, and
that often means the end users don't know either. With the queries unknown,
turning end users loose on a 500GB operational database to perform huge
queries could prove unacceptable (it may bring the system down to a crawl).
So, many customers are resorting to separating their operational (OLTP)
databases and their data warehousing databases. By providing this separation,
the operational database can continue about its business and the data
warehouse users can issue blind queries without affecting the operational
systems. Realize that operational systems may handle anywhere from hundreds
to a few thousand users and, more likely than not, require data that is
highly accurate. However, data warehouse users often don't require
up-to-the-second information; they can often wait several hours, 24 hours or
even days for the most current snapshot, and generally don't require updates
to be made to the data.

So, Sybase IQ can be updated a few times a day, once a day or a few times a
week. Realize that Sybase IQ is strictly a data warehousing solution. It is not
meant for OLTP systems.

Sybase IQ can also sit on top of Sybase SQL Server:

[end user]
|
|
[Sybase IQ]
[Sybase SQL Server]

What happens in this environment is that a data warehouse user can connect to
Sybase IQ. Sybase IQ will then take care of processing the query, or
forwarding the query to SQL Server if it determines that the access paths in
SQL Server are faster. An example where SQL Server will be faster than Sybase
IQ is when SQL Server can achieve query coverage with the indexes built in
SQL Server.

The obvious question is: why not index every column in SQL Server? Because it
would be prohibitive to update any of the data. Hence, Sybase IQ, where all the
columns are making use of the bitwise index scheme. By the way, you can choose
which columns will be part of an IQ implementation. So, you may choose to have
only 30% of your columns as part of your Sybase IQ implementation. Again, I
can't stress enough that Sybase IQ is strictly for data warehousing solutions,
not OLTP solutions.

Back to top

-------------------------------------------------------------------------------

12.2: Net Book Review

-------------------------------------------------------------------------------

* An Introduction to Database Systems
* Sybase
* Sybase Architecture and Administration
* Developing Sybase Applications
* Sybase Developer's Guide
* Sybase DBA Survival Guide
* Guide to SQL Server
* Client/Server Development with Sybase
* Physical Database Design for Sybase SQL Server
* Sybase Performance Tuning
* Sybase Replication Server, An Administrators Guide
* Optimising Transact-SQL
* Tree and Graph Processing in SQL
* Transact SQL
* Sybase ASE, Database Consistency Checking
* Configuring & Tuning Databases on the Solaris Platform

An Introduction to Database Systems

ISBN: 0-201-54329-X Published by Addison-Wesley. Volume I and II.

This book is rightly regarded by many as the Bible of Database Management
Systems. Not a book that goes into detailed specifics of any particular
implementation (although it draws many examples from DB2), this book covers the
practical theory that underlies all relational systems as well as DBMS in
general. It is written in an easy to read, approachable style, and gives plenty
of practical examples.

Covering all aspects, from straight forward issues (such as what is a
relational database), to practical procedures (all forms of normalization are
covered, and explained). SQL is briefly covered, in just the right amount of
detail. The book includes detailed discussions of issues such as recovery,
concurrency, security and integrity, and extensions to the original relational
model. Current issues are dealt with in detail, such as client/server systems
and the Object Oriented model(s). Literally hundreds of references are included
for further reading.

If you want a book to refer to when your curiosity gets the better of you, or
when a user needs a better understanding of some important database concept,
this is it. It strikes the right balance between theory and practice, and
should be found on every database administrator's book shelf.

Sybase - McGoveran and Date

ISBN: 0-201-55710-X Published by Addison-Wesley. 450 pages.

I think that once, not too long ago, this used to be the only book on Sybase
available. Now it seems to be totally out of print! It covered versions of
Sybase SQL server up to 4.8. It covered a number of aspects of Sybase,
including APT.

Sybase Architecture and Administration - Kirkwood

ISBN: 0-13-100330-5 Published by Ellis Horwood. 404 pages.

This is a good book covering Sybase systems up to and including System 10. It
deals in good depth with the architecture and with how most of the functions,
such as the optimiser, work. It explains in a readable style how devices
work, and how indexes are stored and manipulated.

Developing Sybase Applications - Worden

ISBN: 0-672-30700-6 Published by SAMS. ??? pages. (Inc CD.)

This book seems very similar to the Sybase Developer's Guide (below) to me,
and so I have not bought it. I have browsed through it several times in the
book shop, and decided that his other book covers a good deal of this. There
are chapters on Visual Basic and Powerbuilder.

Sybase Developer's Guide - Worden

ISBN: 0-672-30467-8 Published by SAMS. 698 pages. (Inc disk.)

This is a big book that does not, in my opinion, cover very much. In fact the
disk that is included contains DBATools, and that seems to sum up the first 50%
of the book. There is a fair amount of coverage of the general architecture and
how to install Sybase. Transact SQL, cursors and stored procedures get a fair
covering, as does using C/C++ with DB-Library. (I can find no mention of
CT-Library.) Unfortunately quite a lot of the book covers general issues which
are not covered in sufficient depth to be useful, and just seem to be there to
give the book bulk. Maybe as a developer's guide, his other book would be a
better buy. This would probably be most useful to a small company implementing
a Sybase database.

Sybase DBA Survival Guide - Jeff Garbus, David Solomon, Brian Tretter

ISBN: 0-672-30651-4 Published by SAMS. 506 pages. (Inc disk.)

This book is good, and is a great help in a crisis. It includes lots of useful
ideas and strategies for most (if not all) of the DBA tasks. It covers Sybase
SQL Server on all platforms. It does not specifically cover any of the
Microsoft versions, and certainly not version 6. It does cover System 10. It is
very good at explaining the output from things like the DBCC commands. There is
also a good section on what to look for in the errorlog. If you are a DBA and
want to buy just one book, I would recommend this one since it covers just
about everything you will need to know. This book is filled with little hints,
tips and warnings which are very useful. They have certainly saved my bacon on
a number of occasions, and have even made me look a real star more than once.

Guide to SQL Server - Aloke Nath

ISBN: 0-201-62631-4 Published by Addison-Wesley. 567 pages.

This book is solely about MS SQL Server, covering 4.2 for OS/2 and SQL Server
NT. It is not bad, but does seem to regurgitate a lot from the Sybase [sic]
manuals. Its coverage is fairly broad dealing with Transact SQL on the one hand
through to client configuration on the other. It does cover the aspects of MS
Sqlserver that are different from Sybase, (dbcc perfmon for instance) but it
does not flag any as such. Probably a good buy if you only have MS Sqlserver
and never intend looking at Sybase.

Client/Server Development with Sybase - Alex Berson and George Anderson,

ISBN: 0-07-005203-4 Published by McGraw-Hill. 743 pages.

I have used this book as a reference when system manuals were not available.
It is much more useful on how things work and what approach to use than on
syntax.

The breadth of topics pleased me - all the right jargon is mentioned. The
introduction mentions CORBA and DCE. Sybase RPC is compared to UNIX RPCs.
Middleware products are discussed. Talks with our sales rep about the OMNI
and NetGateway products were greatly assisted by the diagrams in the Open
Server and Gateways chapter.

Like any text, it is dated (as of printing). The Netgateway diagram does not
show a TCP/IP interface to MVS. However, the information provided is not
really diminished. This goes back to the fact that this is a How Things Work
and How to Use Things book, not a compilation of details on a single version.

Physical Database Design for Sybase SQL Server - Rob Gillette, Dean Meunch,
Jean Tabaka

ISBN: 0-13-161523-8 Published by Prentice-Hall. 225 pages.

Supposedly the first in a series from Sybase Professional Services, espousing
the Sybase Development Framework or SDF (tm). I've seen no more books, and have
never heard any more about SDF. This book is a reasonable attempt to guide
developers through the process of turning a logical database design into a
physical Sybase implementation.

Topics include:

* Defining Tables and Columns
* Defining Keys
* Identifying Critical Transactions
* Adding Redundant Columns
* Adding Derived Columns
* Collapsing Tables
* Splitting Tables
* Handling Supertypes and Subtypes
* Duplicating Parts of Tables
* Adding Tables for derived Data
* Handling Vector Data
* Generating Sequence Numbers
* Specifying Indexes
* Maintaining Row Uniqueness
* Handling Domain Restrictions
* Handling Referential Integrity
* Maintaining Derived and Redundant data
* Handling Complex Integrity Constraints
* Controlling Access to Data
* Managing Object Sizes
* Recommending Object Placement
* Required Inputs to Physical DB Design
* Naming Guidelines

Covers System 10. Lots of good practical hints and guidelines on database
design. In the absence of any competition - a definite recommendation for
newcomers to Sybase database design.

Sybase Performance Tuning - Shaibal Roy & Marc B. Sugiyama

ISBN 0-13-442997-4 Published by Prentice Hall (http://www.prenhall.com). 622
pages.

Covers the topics:

* Tuning for performance
* Hardware and system software
* Sybase product and feature overview
* SQL Server - form and structure
* SQL Server - methods and features
* Physical database design
* Application development
* Monitoring SQL Server
* Instrumenting SQL Code
* Transaction processing performance
* Query processing performance
* Batch processing performance
* Advanced topics - I/O subsystems, named caches and buffer pools and other
enhancements
* Also a load of extra configuration details.

A pleased customer on the above book:

Just a quick note to let you know of a very good book on Performance Tuning
that isn't mentioned in the Sybase FAQ. I bought it a little while ago and it
has quickly become invaluable. It's by two pretty gifted Sybase Engineers
in the SQL Server Performance Team and covers loads of things up to and
including System 11. It deserves to become as big as the bible :)

This I believe is the Holy Grail of Sybase books that a lot of people have
been looking for - an exaggerated claim perhaps - but a damn fine book.

Sybase Replication Server - An Administrator's Guide - John Kirkwood and Garry
Arkle

ISBN 0-9537155-0-7 Published by Kirkwood Associates Ltd

This is a very readable introduction and guide to Sybase replication. Having
just installed and configured my first repserver site, this book proved very
useful. Rather than give a whole breakdown of the contents here, I will point
out that the book is featured on their website http://www.pagelink.demon.co.uk/
where a full breakdown of the contents etc. can be obtained. This is one of the
few books on
replication and I can thoroughly recommend it to new users and people with a
fair amount of replication experience. I cannot say whether or not it would be
useful to people with a lot of replication experience since I don't know anyone
of that ilk who has read it.

Optimising Transact-SQL

SQL Forum Press; ISBN: 0964981203

This book is definitely not for the beginner. It covers what the author
describes as characteristic functions. These are functions that allow you to
do a lot of data manipulation with a single pass of a table. Whether you like
them or not is completely a matter of taste; read the reviews on Amazon.com to
see the truth in that statement. The book pre-dates the inclusion of the CASE
statement in most SQL dialects, including T-SQL, and it is certainly true that
you can use the CASE statement to do a lot of what characteristic functions
can do. However, table pivoting is definitely an exception and there are
probably others. Personally I like the book since it shows a completely
different way of thinking about problems and their solutions. A small sketch
of the technique is given below.
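
Here is a minimal sketch, with hypothetical table and column names, of a
characteristic function used to pivot a table, assuming a sales(stor_id, qtr,
qty) table with an integer quarter column. The factor (1 - abs(sign(qtr - N)))
is 1 when qtr = N and 0 otherwise, so each sum picks up only the rows for its
own quarter and the table is pivoted in a single pass (the sp__optdiag
procedure reproduced later in this FAQ relies on the same trick):

    select stor_id,
           "Q1" = sum(qty * (1 - abs(sign(qtr - 1)))),
           "Q2" = sum(qty * (1 - abs(sign(qtr - 2)))),
           "Q3" = sum(qty * (1 - abs(sign(qtr - 3)))),
           "Q4" = sum(qty * (1 - abs(sign(qtr - 4))))
    from sales
    group by stor_id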

Possibly tricky to get hold of.

Tree and Graph Processing in SQL

SQL Forum Press; ISBN: ???

The only thing I have on this is the following:

The best work I've ever read on the subject of Tree and Graph processing in SQL
is strangely entitled: "Tree and Graph Processing in SQL" by Dr. David
Rozenshtein et al.

Paul Horan [TeamSybase]

There are no reviews on Amazon at this time, so I cannot even send you there.

Possibly tricky to get hold of.

Transact SQL Programming

ISBN 1-56592-401-0 Published by O'Reilly

This book covers both the Sybase and Microsoft dialects of T-SQL. There is a
very clear side-by-side comparison of the two sets of features. There is also
an excellent description of all of the Microsoft features. I find the same is
not so true of the Sybase parts. The actual book is up to normal O'Reilly
standards and is very readable.

Sybase ASE, Database Consistency Checking

ISBN 0-9537155-1-5 Published by Kirkwood Associates Ltd

This is John Kirkwood's latest offering. The title tells all as far as subject
matter is concerned. An excellent offering, very readable. Covers a lot of the
undocumented dbcc's plus lots of other good stuff. I would have to say a
definite must for all DBAs. Obviously not a book for developers, unless they
are also part-time DBAs. However, if you want to get a better understanding
of how Sybase internal storage works, this covers a lot of that.

At the time of writing the book was available from Amazon.co.uk but not
amazon.com. I am not sure if this is likely to change or not. You can always
get it from his own site, http://www.pagelink.demon.co.uk/.

Configuring & Tuning Databases on the Solaris Platform

ISBN: 0-13-083417-3 Published by Sun Microsystems Press. 502 pages.

An excellent book that slices and dices from both OS and database perspectives.
Oracle, Sybase (ASE and a bit of IQ-M), Informix XPS, and DB2 are covered. The
core subject is covered in a drill-down fashion and includes details between
various versions (including Oracle 9i, ASE 12.5, and Solaris 2.8). The author
also covers database architectures, application workloads, capacity planning,
benchmarking (including the various TPC flavors), RAID (including Sun Volume
Manager and Veritas), performance metrics, and Java. Even for non-Sun
environments this book may be quite useful.

Back to top

-------------------------------------------------------------------------------

12.3: email lists

-------------------------------------------------------------------------------
email lists
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Type | Description | Emails/Day | How to subscribe |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | | | Send mail to sqsh-users...@yahoogroups.com |
| sqsh-users | YahooGroups | Bugs/issues/complaints about sqsh - see Q9.12. | < 1 | |
| | | | | Goto http://www.yahoogroups.com for more details. |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | | | Send email to sybase-dba...@yahoogroups.com |
| sybase-dba | YahooGroups | Discussion of administration of Sybase databases | < 1 | |
| | | | | Goto http://www.yahoogroups.com for more details. |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | | | Send email to LIST...@LISTSERV.UCSB.EDU |
| | | | | |
| SYBASE-L | Listserv | Discussion of SYBASE Products, Platforms & Usage | ~ 10 - 20 | with a subject of |
| | | | | |
| | | | | SUBSCRIBE SYBASE-L your name |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | Exactly the same list as above, but through Yahoo. | | |
| | | | | |
| | | One of the nice features of having the group | | |
| SYBASE-L | YahooGroups | mirrored at Yahoo is that it makes trawling the | ~ 10 - 20 | Send email to sybase-l-...@yahoogroups.com |
| | | archives very easy. Goto the website, there are | | |
| | | enough links to it already on this page, feed | | |
| | | 'sybase-l' into the search box, select the correct | | |
| | | group and read. | | |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | | | Send email to list...@list.cren.net with a subject of |
| Sybperl | Listserv | Discussion of things Perl and Sybase | < 1 | |
| | | | | SUBSCRIBE SYBPERL-L yo...@email.address |
|--------------+-------------+----------------------------------------------------+----------------+---------------------------------------------------------------------------------------------------|
| | | | | Subscribe by going to |
| ase-linux | Majordomo | Specific discussion of Sybase on Linux | 1 - 5 | |
| | | | | http://www.isug.com/ISUG2/ase_linux_form.html |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Back to top

-------------------------------------------------------------------------------

12.4: Finding Information at Sybase

-------------------------------------------------------------------------------

Sybase has now gone completely Portal - or is that Postal? The front desk is
most definitely www.sybase.com, which leads to a very polished site. A more
useful thing to do is to sign up at the site for your own particular
perspective. You can do this by going to my.sybase.com, where you can
configure your account to show only those parts of the system that you are
interested in or that are relevant to you. The links below give you a couple
of faster pointers to some specific sites.

Sybase Web Sites

Caveat: Sybase has implemented a portal. Quite a number of the old links that
were/are in the FAQ no longer work. The following is tried and tested as of
today (20th September 2001) but could well become out-of-date. Let's hope not!

Here's a list of internet web sites at Sybase:

Sybase corporate (search, browse)
This is the start of the portal. From here you can get everywhere. The
following links simply allow for a more direct route to a few places.
Sybase Technical Support Web site (gateway, meta-search, browse)
Gateway to all support information at Sybase.
Sybooks-on-the-Web (search, browse)
Sybase Enterprise product manuals. This is the main site for product
manuals. It's browseable and searchable.
Technical Information Library (search, browse)
This is the place to find all of the Answerbase content plus lots more:
FAQs, White Papers, TechNotes, Customer Letters, Certification Reports,
Problem Reports, Release Bulletins and much more. This is a searchable and
browseable site.
Infobases (search, browse)
This link takes you directly to the solved cases area. It is searchable.
Sybase's public news server (browse)
Newsgroups for most Sybase products moderated by Sybase representatives.
Savvy lurkers here.

Getting Sybase Software

There are a few types of software available from Sybase. These include
Enterprise Emergency Bug Fixes (EBFs), which are roughly equivalent to
patches, Tools patches and upgrades, and beta software downloads:

Electronic Software Distribution (ESD)
EBFs for Enterprise, Workplace and Tools products
Free Sybase Software Downloads
Downloadable Sybase software found here includes demos, betas and test
drives of Sybase software.
Sybase E-Shop
Online ordering of Sybase software and accessories. Items ordered here will
be ground shipped. This service is only available to customers in the US
and Canada.

Back to top

-------------------------------------------------------------------------------

Archive-name: databases/sybase-faq/part18

#!/usr/bin/perl

# Author: Vincent Yin (um...@mctrf.mb.ca) Aug 1994 Last Modified: May 1996

chomp($basename = `basename $0`);

$usage = <<EOF;
USAGE
$basename database userid passwd pattern [ pattern... ]

DESCRIPTION
Prints isql scripts that would insert records into the
tables whose names match any of the patterns in command line. In
other words, this program reverse engineers the data in a given
table(s). Roughly, it does a `select * from <table>', analyses the data
and table structure, then prints out a bunch of
insert <table> values ( ... )
statements that would re-populate the table. It's an alternative
to `bcp'. `bcp' has its limitations (e.g. one often needs to turn on
"select into/bulk copy" option in the database before running bcp.)

Table names are matched to <pattern> with Transact-SQL's LIKE clause.
When more than one pattern is specified on command line, the LIKE
clauses are OR'ed. In any case, the LIKE clause(s) is logged to
the beginning of the output as a comment, so that you'll see how this
program interprets the command line.

The SQL script is printed to stdout. Since it only prints out the SQL
but doesn't submit it to the SQL server, this procedure is safe to run.
It doesn't modify database in any way.

EXAMPLES
To print this usage page:
% $basename
To print SQL that populates the table master..sysobjects and systypes:
% $basename master userid passwd "sysobjects" "systypes"
To print SQL that populates all system tables in master db:
% $basename master userid passwd "sys%"

BUGS
Embedded line breaks in strings are allowed in Sybase's isql, but not
allowed in SQLAnywhere's isql. So this script converts embedded line
breaks (both DOS styled and UNIX styled) to blank characters.

EOF

$batchsize = 10; # The number of INSERTs before a `go' is issued.
# This is to make the output compact.
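#
# For illustration only: given a hypothetical table "codes" with columns
# (id int, descr varchar(20)), the generated script looks roughly like:
#
#   truncate table codes -- Lookout !!!!!!
#   go
#   insert codes values(1, 'first')
#   insert codes values(2, 'second')
#   go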

# .................... No change needed below this line ........................

use Sybase::DBlib;

die $usage unless $#ARGV >= 3;
($db, $user, $passwd, @pattern) = @ARGV;

$likeclause = &sql_pattern_to_like_clause("name", @pattern);

print <<EOF;
-- This script is created by $0.


-- It would generate INSERT statements for tables whose names match the
-- following pattern:

/* $likeclause
*/

set nocount on
go
EOF

$dbh = new Sybase::DBlib $user, $passwd;
$dbh->{dbNullIsUndef} = 1;
$dbh->dbuse($db);

# Get the list of tables.
$tablelist = $dbh->sql("select name
from sysobjects
where type in (\'S\',\'U\')
and $likeclause
order by name
");

foreach $tableref (@$tablelist) {
    $table = @$tableref[0];
    print "\n\n/*.............. $table ...............*/\n";
    print "-- ", `date`, "\n";
    print "declare \@d datetime\n";
    print "select \@d = getdate()\n";
    print "print ' %1! $table', \@d\ngo\n\n";
    print "truncate table $table -- Lookout !!!!!!\ngo\n\n";

    $dbh->dbcmd("select * from $table");
    $dbh->dbsqlexec;
    $dbh->dbresults;

    while (@row = $dbh->dbnextrow()) {
        print "insert $table values(";

        for ($i = 0; $i <= $#row; $i++) {   # build the INSERT statement
            # Analyse the datatype to decide if this column needs to be quoted.
            $coltype = $dbh->dbcoltype($i + 1);

            if (!defined($row[$i])) {
                print "NULL";    # Never quote NULL regardless of datatype
            }
            elsif ($coltype==35 or $coltype==39 or $coltype==47 or
                   $coltype==58 or $coltype==61 or $coltype==111) {
                # See systypes.type/name for an explanation of $coltype.
                $row[$i] =~ s/\r|\n/ /g;    # Handles both DOS and UNIX line breaks
                $row[$i] =~ s/\'/\'\'/g;    # Double up embedded single quotes
                print '\'' . $row[$i] . '\'';
            } else {
                print $row[$i];
            }
            print ", " unless $i == $#row;
        }

        print ")\n";    # wrap up the INSERT statement.
        # print a `go' at every $batchsize interval.
        print "go\n" unless $dbh->DBCURROW % $batchsize;
    }
    print "\ngo\n\n";   # print a `go' after the entire table is done.
    print "-- ### End for $table: rowcount = ", $dbh->DBCURROW, "\n";
}

# ................................. sub ........................................
sub sql_pattern_to_like_clause {
    local($field_name, @pattern) = @_;
    $like_clause = "\t( 1 = 0 ";
    foreach (@pattern) {
        $like_clause .= "\n or $field_name like '" . $_ . "' ";
    }
    $like_clause .= "\n\t) \n";
}
#!/bin/sh
#-*-sh-*-
# Code for question 9.3: Generating dump/load database command.
#
# This script calls the function gen_dumpload_command to generate
# either a dump or a load command.
#
# This function works for both System 10 and Sybase 4.x
# installations. You simply need to change your method of thinking.
# In Sybase 4.x, we only had a single stripe. In System 10, most
# of the time we define a single stripe but in our bigger databases
# we define more stripes.
#
# Therefore, everything is a stripe. Whether we use one stripe or
# many... cool? Right on!
#
#
# The function gen_dumpload_command assumes that all dump devices
# adhere to the following naming convention:
#
# stripe_NN_database
#
# NOTE: If your shop is different search for "stripe" and replace
# with your shop's value.
#
#


# gen_dumpload_command():
#
# purpose: to generate a dump/load to/from command based on
# what is defined in sysdevices. The environment
# variable D_DEV is set.
#
# return: zero on success, non-zero on failure.
#
# sets var: D_DEV is set with the actual dump/load command;
# stripe devices are also handled.
#
# calls: *none*
#
# parms: 1 = DSQUERY
# 2 = PASSWD
# 3 = DB
# 4 = CMD -> "dump" or "load"
#


gen_dumpload_command()
{
    LOCAL_DSQUERY=$1
    LOCAL_PASSWD=$2
    DB_TO_AFFECT=$3
    CMD=$4                      # dump/load

    if [ "$CMD" = "dump" ] ; then
        VIA="to"
    else
        VIA="from"
    fi

    # Check for a dump device: ask the server for matching device names,
    # then turn the list into a dump/load command with stripe clauses.

    echo "Checking for standard $CMD device"

    D_DEV=`echo "$LOCAL_PASSWD
select name from sysdevices where name like \"stripe%_$DB_TO_AFFECT\"
go" | \
    $SYBIN/isql -U sa -S $LOCAL_DSQUERY -w1000 | sed -n -e '/stripe/p' | \
    gawk '{ if (NR == 1) print "'$CMD' database '$DB_TO_AFFECT' '$VIA'", $0
            else print "stripe on", $0
          }'`

    if [ -z "$D_DEV" ] ; then   # nothing defined... :(
        return 1
    fi

    return 0
}

SYBIN=$SYBASE/bin

gen_dumpload_command $1 $2 $3 $4

if [ $? -eq 1 ] ; then
echo "Error..."
exit 1
fi

# so what does this generate? :-)
echo $D_DEV
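#
# For example (hypothetical device names): with stripe_01_proddb and
# stripe_02_proddb defined for database proddb, a "dump" run generates:
#
#   dump database proddb to stripe_01_proddb
#   stripe on stripe_02_proddb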

# ... and it can be used as follows:

echo "$PASSWD
$D_DEV
go" | isql ...

exit 0
#!/usr/bin/perl

# $Id: int.pl,v 1.4 1995/11/04 03:16:38 mm Exp mm $

# convert a sun4 interfaces file to a different format (see @modelist)
# limitations:
# - does not handle tli/spx entries (yet)
# - drivers for desktop platform hard coded
# - no sanity checks (duplicate names, incomplete entries)
# - ignores extraneous tokens silently (e.g. a 6th field)
# - don't know whether/how to convert decnet to tli format
# - ???

require 'getopts.pl';

sub usage
{
local(@token) = @_;

if (!($token[0] eq 'short' || $token[0] eq 'long'))
{
printf STDERR "Environment variable(s) @token not defined.\n";
exit (1);
}

print STDERR <<EOM;
Usage: $progname -f <sun4 interfaces file>
-o { $modetext1 }
[-V] [-v] [-h]
EOM

if ($token[0] eq 'long')
{
print STDERR <<EOM;
where
-f <file> input file to process
-o <mode> specify output mode
(e.g. $modetext2)


-V turn on verbose mode
-v print version string
-h print this message

EOM
}
else
{
print STDERR "For more details run $progname -h\n";
}
exit(1);
} # end of usage


# FUNCTION NAME: parse_command_line
# DESCRIPTION: call getopts and assign command line arguments or
# default values to global variables
# FORMAL PARAMETERS: none
# IMPLICIT INPUTS: command line arguments
# IMPLICIT OUTPUTS: $inputfile, $mode, $verbose
# RETURN VALUE: none, exits (in usage) if -h was specified
# (help option).
# SIDE EFFECTS: none
#
sub parse_command_line {
&Getopts('f:o:hvV') || &usage('short');
$inputfile = $opt_f;
$mode = $opt_o;
$verbose = $opt_V ? 1 : 0;

print("$progname version is: $version\n"), exit 0 if $opt_v;
&usage('long') if $opt_h;
&usage('short') if ! $inputfile || ! $mode;
&usage('short') if ! grep($mode eq $_, @modelist);
} # end of parse_command_line

# FUNCTION NAME: process_file
# DESCRIPTION: parse file, try to convert it line by line.
# FORMAL PARAMETERS: $file - file to process
# IMPLICIT INPUTS: none
# IMPLICIT OUTPUTS: none
# RETURN VALUE: none
# SIDE EFFECTS: none

sub process_file {
local($file) = @_;
open(INPUT, "<$file") ||
die "can't open file $file: $!\nExit.";
local($line) = 0;
local($type, $prot, $stuff, $host, $port, $tmp);
print $os2_header if $mode eq 'os2';
while (<INPUT>)
{
$line++;
# handle empty lines (actually lines with spaces and tabs only)
#print('\n'), next if /^\s*$/;
next if /^\s*$/;
chop;
# comments, strip leading spaces and tabs
s/^\s*//, print("$_$lf{$mode}\n"), next if /^\s*#/;
#s/^\s*//, next if /^\s*#/;

# server names
if (/^\w+/)
{
if ($mode eq 'sol' || $mode eq 'ncr'
|| $mode eq 'vms' || $mode eq 'nw386')
{
print "$_$lf{$mode}\n";
next;
}
elsif ($mode eq 'os2')
{
$server = $_;
next;
}
else {
print "[$_]$lf{$mode}\n" if !(/SPX$/);
next;
}
}

if (/^\tmaster|^\tquery|\tconsole/)
{
# descriptions
# parse first whitespace delimited word and
# following space(s)
# quietly ignore any extraneous characters
# I actually tried to catch them, but - believe
# it or not - perl would chop off the last digit of
# $port. vvvv
# /^\t(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\d+)(.+)$/;
if (!(($type, $prot, $stuff, $host, $port) =
/^\t(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/))
{
print STDERR "line $line: unknown format: $_";
next;
}
#print ("line $line: more than 5 tokens >$etc<, \n"),
# next if $etc;
if (!($type eq 'master' || $type eq 'query'
|| $type eq 'console'))
{
# unknown type
print STDERR "line $line: unknown type $type\n";
next;
}
if ($prot eq 'tli')
{
#print STDERR "line $line: can't handle tli",
# " entries (yet)\n";
# adjust to tli format
($layer, $prot, $device, $entry) =
($prot, $stuff, $host, $port);
print "\t$type tli $prot $device ",
"$entry$lf{$mode}\n" if $mode ne 'win3';
next;
}
if (!($prot eq 'tcp' || $prot eq 'decnet'))
{
# unknown protocol
print STDERR
"line $line: unknown protocol $prot\n";
next;
}
if ($mode eq 'sol' || $mode eq 'ncr' || $mode eq 'nw386')
{
$ip = &get_ip_address($host, 'hex');
$hexport = sprintf("%4.4x", $port);
print "\t$type tli $prot $device{$prot} \\x",
"$prefix{$mode}$hexport$ip$nulls{$mode}\n";
next;
}
if ($mode eq 'vms')
{
$ip = &get_ip_address($host, 'dot');
print "\t$type $prot $stuff $ip $port\n";
next;
}
if ($mode eq 'nt386')
{
$type =~ tr/a-z/A-Z/;
print "\t$type=$sock{$mode},$host,",
"$port$lf{$mode}\n";
next;
}
if ($mode eq 'dos' || $mode eq 'win3')
{
next if $type ne 'query';
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
next;
}
if ($mode eq 'ntdoswin3')
{
($tmp = $type) =~ tr/a-z/A-Z/;
# watch out for this local($mode) !!
# its scope is this BLOCK only and
# (within this block) overrides the
# other $mode!!! But we can still access
# the array %sock.
local($mode) = 'nt386';
print "\t$tmp=$sock{$mode},$host,$port",
"$lf{$mode}\n";
next if $type ne 'query';
$mode = 'dos';
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
$mode = 'win3';
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
next;
}
if ($mode eq 'os2')
{
print " \'$server\' \'$type\' \'$sock{'os2'}",",$host,$port\'\n";
next;
}
}
printf STDERR "line %d is ->%s<-\n", $line, $_;
}
close(INPUT);
print $os2_tail if $mode eq 'os2';

} # end of process_file

# FUNCTION NAME: print_array
# DESCRIPTION: print the array
# FORMAL PARAMETERS: *array - array to be printed, passed by reference
# IMPLICIT INPUTS: none
# IMPLICIT OUTPUTS: none
# RETURN VALUE: none
# SIDE EFFECTS: none
#
sub print_array {
local(*array) = @_;
foreach (sort keys %array)
{
printf STDERR "%-16s %s\n", $_, $array{$_};
}

} # end of print_array

# FUNCTION NAME: get_ip_address
# DESCRIPTION: get the ip address of a host specified by name, return
# it as a string in the requested format, e.g.
# requested format == 'dot' --> return 130.214.140.2
# requested format == 'hex' --> return 82d68c02
# In order to avoid repeated calls of gethostbyname with
# the same host, store (formatted) results of gethostbyname
# in array %map.
# FORMAL PARAMETERS: name of host, requested return type: hex or dot format
# IMPLICIT INPUTS: %map
# IMPLICIT OUTPUTS: none
# RETURN VALUE: ip address
# SIDE EFFECTS: maintains %map, key is host name, value is ip address.
#
sub get_ip_address {
local($host, $mode) = @_;
if (!$map{$host})
{
#print "calling gethostbyname for $host";
($name, $aliases, $addrtype, $length, @addrs) =
gethostbyname($host);
$map{$host} = join('.', unpack('C4', $addrs[0]));
if ($mode eq 'hex')
{
$map{$host} = sprintf("%2.2x%2.2x%2.2x%2.2x",
split(/\./, $map{$host}));
}
#print " - $map{$host}\n";
}
return $map{$host};
} # end of get_ip_address


$version = "\$Id: int.pl,v 1.4 1995/11/04 03:16:38 mm Exp mm \$";
$| = 1;
($progname = $0) =~ s#.*/##g;
@modelist = ('sol', 'ncr', 'vms', 'nw386', 'os2',
'nt386', 'win3', 'dos', 'ntdoswin3');
$modetext1 = join('|', @modelist);
$modetext2 = join(', ', @modelist);

# tli on solaris needs more zeroes
$nulls{'sol'} = "0000000000000000";
$nulls{'nw386'} = "0000000000000000";
$nulls{'ncr'} = "";
$nulls{'nt386'} = "";

# prefix for tli entries
$prefix{'sol'} = "0002";
$prefix{'nw386'} = "0200";
$prefix{'ncr'} = "0002";
$prefix{'nt386'} = "0200";

# protocol devices
$device{'tcp'} = "/dev/tcp";
$device{'spx'} = "/dev/nspx";
$device{'decnet'} = "/dev/tcp";
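
# Example (hypothetical host "dbhost" at 130.214.140.2, port 2025): the sun4
# entry
#     master tcp ether dbhost 2025
# becomes, in 'sol' mode,
#     master tli tcp /dev/tcp \x000207e982d68c020000000000000000
# i.e. \x + prefix (0002) + port as 4 hex digits + IP as 8 hex digits +
# padding nulls.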

# socket driver names
$sock{'nt386'}="NLWNSCK";
$sock{'dos'}="NLFTPTCP";
$sock{'win3'}="WNLWNSCK";
$sock{'os2'}="nlibmtcp";

# carriage returns (^M) appended for the MS world, so output lines end in CRLF
$lf{'nt386'}="\r";
$lf{'dos'}="\r";
$lf{'win3'}="\r";
$lf{'ntdoswin3'}="\r";
$lf{'os2'}="";
$lf{'vms'}="";
$lf{'sol'}="";
$lf{'ncr'}="";
$lf{'nw386'}="";

$os2_header = sprintf("STRINGTABLE\nBEGIN\n%s", " \'\'\n" x 10);
$os2_tail = "END\n";

&parse_command_line;
&process_file($inputfile);
&print_array(*map) if $verbose;
#!/usr/bin/perl -w

use Getopt::Std;
use strict;
use English;

my($fullRow, @processStats, $owner, $pid, $parentPid);
my($started, $engineNum, %engine);
my($cpuTime, $servType, $param, $servParam, @dirComps);
my(@engineParts, %stypes, @procParts);
my($serverName, %server, $srvType, $engine);
my($cmd);

# (Empirically) I have found with large numbers of engines, that not
# all of the child parent relationships are as you imagine, ie engine
# 0 does not start off all other engines. "-l" indents to show this
# heirarchy.

use vars qw($opt_l);    # getopts() sets $opt_l in this package; declare it for "use strict"

getopts('l');

# Script to show, in a nice fashion, all of the Sybase servers on a
# system.
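#
# Sample output (server name and PIDs are hypothetical); with -l the engine
# lines are indented to show the parent/child hierarchy:
#
#   dataserver's
#   ------------
#
#    SYB_PROD Owner: sybase, Started: 08:00:00
#     Engine:  0 (PID: 1234)
#      Engine:  1 (PID: 1236)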

$cmd = "ps -ef -o user,pid,ppid,start,comm";

SWITCH:
for ($OSNAME) {

/AIX|OSF1/ && do {
$cmd = "ps auwwx";
last SWITCH;
};

/freebsd/ && do {
$cmd = "ps ps awwxo user,pid,ppid,start,command";
last SWITCH;
};

/linux/ && do {
$cmd = "ps -awxo user,pid,ppid,stime,command";
last SWITCH;
};

/solaris/ && do {
$cmd = "ps -ef -o user,pid,ppid,stime,args";
last SWITCH;
};
}


open(PSCMD, "$cmd |") or die("Cannot fork: $!");

while (<PSCMD>) {
next if !/dataserver|backupserver|repserver|rsmsrvr|monserver/;

# Remove any white space after the -[sS] command.

s/(-[sS])\s+/$1/;

# Remove leading space.

s/^ *//;

$fullRow = $_;
@processStats = split(/\s+/);

$owner = shift(@processStats);
$pid = shift(@processStats);
$parentPid = shift(@processStats);
$started = shift(@processStats);

# $cpuTime = shift(@processStats);

$cpuTime = 999;

# Is it a parent or a child?

if ($fullRow =~ /-ONLINE:/) {
# Child!
@procParts = split(/[:]/, $processStats[1]);
@engineParts = split(/[,]/, $procParts[1]);
$engineNum = $engineParts[0];
push(@{ $engine{$parentPid} }, [ $pid, $engineNum, $cpuTime ]);
} else {

$servParam = shift(@processStats);
@dirComps = split(/\//, $servParam);
$servType = pop(@dirComps);

PROCSTAT:
foreach $param (@processStats) {
if ($param =~ /^-[sS]/) {
$serverName = substr($param, 2);
last PROCSTAT;
}
}
$server{$pid} = [ $serverName, $owner, $started ];
push(@{ $stypes{$servType} }, $pid);
push(@{ $engine{$pid} }, [ $pid, 0, $cpuTime ]);
}
}

close(PSCMD);

foreach $srvType (keys(%stypes)) {
print "\n$srvType\'s\n";
print "-" x (length($srvType) + 2);


foreach $pid (@{ $stypes{$srvType} }) {
print "\n $server{$pid}[0] Owner: $server{$pid}[1], Started: $server{$pid}[2]";

printEngines($pid, 0);
}
print "\n";
}

print "\n";

sub printEngines {

my($pid) = shift;
my($level) = shift;

if (defined($engine{$pid})) {
foreach $engine (@{ $engine{$pid} }) {
print "\n ";

print " " x $level if defined($Getopt::Std::opt_l);

printf "Engine: %2.2s (PID: %s)", @$engine[1], @$engine[0];

if (@$engine[0] ne $pid) {
printEngines(@$engine[0], $level + 1);
}
}
}
}


use sybsystemprocs
go

CREATE PROCEDURE sp__create_crosstab
 @code_table varchar(30) -- table containing code lookup rows
,@code_key_col varchar(30) -- name of code/lookup ID column
,@code_desc_col varchar(30) -- name of code/lookup descriptive text column
,@value_table varchar(30) -- name of table containing detail rows
,@value_col varchar(30) -- name of value column in detail table
,@value_group_by varchar(30) -- value table column to group by.
,@value_aggregate varchar(5) -- operator to apply to value being aggregated

AS
/*
Copyright (c) 1997, Clayton Groom. All rights reserved.
Procedure to generate a cross tab query script
Requires:
1. A lookup table with a code/id column and/or descriptive text column
2. A data table with a foreign key from the lookup table & a data value to aggregate
3. column(s) name from data table to group by
4. Name of an aggregate function to perform on the data value column.
*/

set nocount on

if sign(charindex(upper(@value_aggregate), 'MAX MIN AVG SUM COUNT')) = 0
BEGIN
print "@value_aggregate value is not a valid aggregate function"
-- return -1
END

declare @value_col_type varchar(12) -- find out data type for aggregated column.
,@value_col_len int -- get length of the value column
,@str_eval_char varchar(255)
,@str_eval_int varchar(255)
-- constants
,@IS_CHAR varchar(100) -- character data types
,@IS_NOT_ALLOWED varchar(100) -- data types not allowed
,@IS_NUMERIC varchar(255) -- numeric data type names
,@NL char(2) -- new line
,@QUOTE char(1) -- ascii character 34 '"'
--test variables
,@value_col_is_char tinyint -- 1 = string data type, 0 = numeric or not allowed
,@value_col_is_ok tinyint -- 1 = string or numeric type, 0 = type cannot be used.
,@value_col_is_num tinyint -- 1 = numeric data type, 0 = string or not allowed

select @IS_CHAR = 'varchar char nchar nvarchar text sysname'
,@IS_NOT_ALLOWED= 'binary bit varbinary smalldatetime datetime datetimn image timestamp'
,@IS_NUMERIC = 'decimal decimaln float floatn int intn money moneyn numeric numericn real smallint smallmoney tinyint'
,@NL = char(13) + char(10)
,@QUOTE = '"' -- ascii 34

-- get the base data type & length of the value column. Is it a numeric type or a string type?
-- need to know this to use string or numeric functions in the generated select statement.
select @value_col_type = st.name
,@value_col_len = sc.length
from syscolumns sc
,systypes st
where sc.id = object_id(@value_table)
and sc.name = @value_col
and sc.type = st.type
and st.usertype = (select min(usertype)
from systypes st2
where st2.type = sc.type)
--select @value_col_type, @value_col_len

select @value_col_is_char = sign(charindex( @value_col_type, @IS_CHAR))
,@value_col_is_ok = 1 - sign(charindex( @value_col_type, @IS_NOT_ALLOWED))
,@value_col_is_num = sign(charindex( @value_col_type, @IS_NUMERIC))

IF @value_col_is_ok = 1
BEGIN
if @value_col_is_char = 1
begin
select @str_eval_char = ''
end
else
if @value_col_is_num = 1
begin
select @str_eval_char = ''
end
else
begin
print " @value_col data type unnown. must be string or numeric"
-- return -1
end
END
ELSE --ERROR
BEGIN
print " @value_col data type not allowed. must be string or numeric"
-- return -1
END

-- template. first level expansion query.
-- result must be executed to generate final output query.

SELECT "select 'select " + @value_group_by + "'"
IF @value_col_is_char = 1
BEGIN
SELECT "select '," + @QUOTE + "' + convert(varchar(40), " + @code_desc_col+ " ) + '" + @QUOTE + @NL
+ " = "
+ @value_aggregate
+ "(isnull( substring("
+ @value_col
+ ", 1, ( "
+ convert(varchar(3), @value_col_len )
+ " * charindex( "
+ @QUOTE
+ "'+"
+ @code_key_col
+ "+'"
+ @QUOTE
+ ", "
+ @code_key_col
+ " ))), "
+ @QUOTE + @QUOTE
+ "))'"
END
ELSE IF @value_col_is_num = 1
BEGIN
SELECT "select '," + @QUOTE + "' + convert(varchar(40), " + @code_desc_col+ " ) + '" + @QUOTE + @NL
+ " = "
+ @value_aggregate
+ "("
+ @value_col
+ " * charindex( "
+ @QUOTE
+ "'+"
+ @code_key_col
+ "+'"
+ @QUOTE
+ ", "
+ @code_key_col
+ "))'"
END
SELECT "from " + @code_table + @NL
+ "select 'from " + @value_table + "'" + @NL
+ "select 'group by " + @value_group_by + "'"

-- end
go
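
-- Example invocation (table and column names are purely illustrative): given
-- a lookup table status_codes(status_id, status_desc) and a detail table
-- orders(amount, cust_id, ...), this prints a first-stage script which, when
-- run in turn, produces the final crosstab query:
--
--   exec sp__create_crosstab 'status_codes', 'status_id', 'status_desc',
--        'orders', 'amount', 'cust_id', 'SUM'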
use sybsystemprocs
go

if object_id('sp__indexreport') is not null
drop procedure sp__indexreport
go

/*
** A system sproc to report on user indexes.
**
** Written by Anthony Mandic - July 2000.
*/
create procedure sp__indexreport
as

if @@trancount = 0
set chained off

set transaction isolation level 1

set nocount on

/*
** Check for user tables first.
*/
if (select count(*) from sysobjects where type = "U") = 0
begin
print "No user tables found in current database"
return 1
end

/*
** Check for tables without any indexes.
*/
select name
into #tablelist
from sysindexes
group by id
having count(id) = 1
and indid = 0
and id > 99
and name not like "#tablelist%" /* Avoid finding it if run in tempdb */

if @@rowcount > 0
select "Tables without indexes" = name
from #tablelist
order by name

drop table #tablelist

/*
** Select all user indexes where there are multiple indexes on a table.
*/
select tid = id,
tname = object_name(id),
iname = name,
iid = indid,
indexcolumns = convert(varchar(254), "")
into #indexlist
from sysindexes
where id > 99
and indid between 1 and 254
group by id
having count(id) > 1
and indid between 1 and 254

if @@rowcount = 0
begin
print "No duplicate indexes found in current database"
return 1
end

declare @count int,
@tid int,
@size int,
@icolumns varchar(254)

select @count = 1

while @count < 17 /* 16 appears to be the max number of indexes */
begin
update #indexlist
set indexcolumns =
case
when @count > 1 then indexcolumns + ', '
end
+ index_col(tname, iid, @count)
where index_col(tname, iid, @count) is not null

if @@rowcount = 0
break

select @count = @count + 1
end

create table #finallist
(
table_name varchar(30),
index_name varchar(30),
tid int,
index_columns varchar(254)
)

insert #finallist
select b.tname,
b.iname,
b.tid,
b.indexcolumns
from #indexlist a,
#indexlist b
where a.tid = b.tid
and a.indexcolumns like b.indexcolumns + '%'
group by a.tid,
a.iname
having count(*) > 1
and a.tid = b.tid
and a.indexcolumns like b.indexcolumns + '%'

if (select count(*) from #finallist) = 0
begin
print "No duplicate indexes found in current database"
return 1
end

select @size = low / 1024
from master..spt_values
where number = 1
and type = "E"

print "Duplicate leading index columns"
print "-------------------------------"
print ""

/*
** The distinct is needed to eliminate duplicated identical indexes on tables.
** The order by is to get the resultant distinct list sorted.
*/
select distinct
"table name" = table_name,
"index name" = index_name,
"size" = str(
(data_pgs(id, doampg) + data_pgs(id, ioampg)) * @size)
+ " KB",
"index columns" = index_columns
from #finallist,
sysindexes
where id = tid
and name = index_name
order by table_name, index_columns

return 0
go

exec sp_procxmode 'sp__indexreport', 'anymode'
go

grant execute on sp__indexreport to public
go
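
-- sp__indexreport takes no parameters; run it from the database you want to
-- examine (the database name below is illustrative):
--
--   use mydb
--   go
--   exec sp__indexreport
--   go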
set flushmessage on
go

use sybsystemprocs
go

if exists (select 1
from sysobjects
where sysstat & 7 = 4
and name = 'sp__optdiag')
begin
print "Dropping sp__optdiag"
drop procedure sp__optdiag
end
go

print "Installing sp__optdiag"
go

create procedure sp__optdiag
@tabname varchar(62) = null, /* user table name */
@colname varchar(30) = null, /* column name */
@option varchar(60) = null /* output format */
, @proc_version varchar(78) = "sp__optdiag/0.4/0/P/KJS/AnyPlat/AnyOS/G/Fri Jan 5 14:56:32 2001"
as
/*************************************************************************************************
**
** Description: Format opdiag info from stored procedure
**
** Options: NULL - default
**
** "V/?/HELP/H" - will print the current version string of this proc
** "CR" - will approximate cluster ratio calculations. Note that these are simply
** simply approximations since cluster ratio calculations are not published.
** (future change, not supported yet)
**
** Future Info: Other options can be added in the future
** using the @option parameter.
**
** Dependencies: This proc relies on the object_id built-in
** and sp_namecrack
**
** Errors:
**
** Version: This proc is for ASE 11.9.x and beyond
**
** Usage: exec <dbname>..sp__optdiag <tabname>, <colname>, <opt>
**
** History: 10/31/2000 (ksherlock) 0.1
** Original
** 11/14/2000 (ksherlock) 0.2
** Fixed bug to handle binary histograms and handle user defined types
** 12/20/2000 (ksherlock) 0.3
** Fixed bug with column groups not being retrieved in col_cursor
** 01/05/2001 (ksherlock) 0.4
** Final version which handles numeric decimals correctly
**
*************************************************************************************************/

declare
@colid int /* Variable to hold colid from syscolumns */
, @tabid int /* Variable to hold object_id from sysobjects */
, @tabtype char(2) /* Variable to hold type from sysobjects */
, @s_dbname varchar(30)
, @s_tabowner varchar(30)
, @s_tabname varchar(30)
, @u_tabname varchar(30)
, @u_tabowner varchar(30)
, @colgroup_name varchar(255)
, @u_dbname varchar(30)
, @u_dbid int
, @colidarray varbinary(100)
, @colidarray_len smallint
, @indid int
, @index_cols varchar(254)
, @index_name varchar(30)
, @keycnt int
, @dol_clustered int
, @clustered int
, @last_updt varchar(28)
, @c1stat int
, @statid smallint
, @used_count int
, @rownum int
, @coltype int
, @typename varchar(30)
, @collength varchar(5)
, @precision varchar(3)
, @scale varchar(3)
, @rc_density varchar(24)
, @tot_density varchar(24)
, @r_sel varchar(24)
, @between_sel varchar(24)
, @freq_cell smallint
, @steps_act int
, @steps_req int
, @step char(9)
, @weight char(10)
, @prev_step char(9)
, @prev_weight char(10)
, @value_raw varbinary(255)
, @value_c varchar(255)
, @leafcnt varchar(32) -- int
, @pagecnt varchar(32) -- int
, @emptypgcnt varchar(32) -- int
, @rowcnt varchar(32)
, @forwrowcnt varchar(32)
, @delrowcnt varchar(32)
, @dpagecrcnt varchar(32)
, @dpagecr varchar(32)
, @ipagecrcnt varchar(32)
, @ipagecr varchar(32)
, @drowcrcnt varchar(32)
, @drowcr varchar(32)
, @oamapgcnt varchar(32) -- int
, @extent0pgcnt varchar(32)
, @datarowsize varchar(32)
, @leafrowsize varchar(32)
, @indexheight varchar(32) -- int
, @spare1 varchar(32) -- int
, @spare2 varchar(32)
, @ptn_data_pgs int
, @seq int


if @@trancount = 0
begin
set chained off
end

set transaction isolation level 1
set nocount on
set flushmessage on

if ( (select lower(@option)) in ("v","version","?","h","help") )
begin
print "%1!",@proc_version
return 0
end

exec sp_namecrack @tabname, " ", @s_dbname out, @s_tabowner out, @s_tabname out
select @s_dbname = isnull(@s_dbname,db_name())

declare object_cursor cursor for
select id,
db_name(),
db_id(),
user_name(uid),
name
from sysobjects
where user_name(uid) like isnull(@s_tabowner,"%")
and name like isnull(@s_tabname,"%")
and type = "U" and id > 100
order by user_name(uid), name
for read only

declare index_cursor cursor for
select st.indid
, si.name
, abs(sign(si.status2 & 512)) /* DOL clustered index */
, abs(sign(si.status & 16)) /* clustered bit */
, si.keycnt
from systabstats st, sysindexes si
where st.id = @tabid
and si.id = @tabid
and st.id = si.id
and st.indid = si.indid
order by st.indid
for read only

declare col_cursor cursor for
select sc.colid,
ss.colidarray,
datalength(ss.colidarray),
sc.name,
ss.statid,
convert(int,ss.c1),
convert(varchar,ss.moddate,109),
ltrim(str(round(convert(double precision,ss.c2),16),24,16)),
ltrim(str(round(convert(double precision,ss.c3),16),24,16)),
convert(int,ss.c4),
convert(int,ss.c5),
st.name,
ltrim(str(convert(int,ss.c7),5)),
ltrim(str(convert(int,ss.c8),3)),
ltrim(str(convert(int,ss.c9),3)),
ltrim(str(round(convert(double precision,ss.c10),16),24,16)),
ltrim(str(round(convert(double precision,ss.c11),16),24,16))
from syscolumns sc, sysstatistics ss, systypes st
where sc.id = @tabid
and sc.name like isnull(@colname,"%")
and ss.id = sc.id
and convert(int,ss.c6) *= st.type
and st.name not in ("timestamp","sysname", "nchar", "nvarchar")
and st.usertype < 100
and convert(tinyint,substring(ss.colidarray,1,1)) = sc.colid
and ss.formatid = 100
order by sc.id, sc.name, ss.colidarray
for read only

declare nostats_cursor cursor for
select sc.name
from syscolumns sc,
sysstatistics ss
where ss.id =* sc.id
and sc.id = @tabid
and ss.formatid = 100
and ss.statid = 0
and ss.sequence = 1
and sc.colid *= convert(tinyint,substring(ss.colidarray,1,1))
and datalength(ss.colidarray) = 1
group by sc.name
having count(ss.id) = 0
order by sc.name
for read only

create table #cells(seq int,colnum int)

/** DO NOT FOLD, SPINDLE, OR MUTILATE (unless it's sysstatistics) **/
/** OK, bear with me, here we go... **/
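/** The repeated factor (1 - abs(sign(c.colnum - N))) is a characteristic **/
/** function: it is 1 when colnum = N and 0 otherwise, so exactly one     **/
/** term of each 80-way sum (or concatenation) survives, selecting cN.    **/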

declare histogram_cursor cursor for
select
/** Here is the step number **/
str(
((c.seq-1)*80 + 1 )*(1-abs(sign(c.colnum-1 ))) + ((c.seq-1)*80 + 2 )*(1-abs(sign(c.colnum-2 ))) +
((c.seq-1)*80 + 3 )*(1-abs(sign(c.colnum-3 ))) + ((c.seq-1)*80 + 4 )*(1-abs(sign(c.colnum-4 ))) +
((c.seq-1)*80 + 5 )*(1-abs(sign(c.colnum-5 ))) + ((c.seq-1)*80 + 6 )*(1-abs(sign(c.colnum-6 ))) +
((c.seq-1)*80 + 7 )*(1-abs(sign(c.colnum-7 ))) + ((c.seq-1)*80 + 8 )*(1-abs(sign(c.colnum-8 ))) +
((c.seq-1)*80 + 9 )*(1-abs(sign(c.colnum-9 ))) + ((c.seq-1)*80 + 10)*(1-abs(sign(c.colnum-10))) +
((c.seq-1)*80 + 11)*(1-abs(sign(c.colnum-11))) + ((c.seq-1)*80 + 12)*(1-abs(sign(c.colnum-12))) +
((c.seq-1)*80 + 13)*(1-abs(sign(c.colnum-13))) + ((c.seq-1)*80 + 14)*(1-abs(sign(c.colnum-14))) +
((c.seq-1)*80 + 15)*(1-abs(sign(c.colnum-15))) + ((c.seq-1)*80 + 16)*(1-abs(sign(c.colnum-16))) +
((c.seq-1)*80 + 17)*(1-abs(sign(c.colnum-17))) + ((c.seq-1)*80 + 18)*(1-abs(sign(c.colnum-18))) +
((c.seq-1)*80 + 19)*(1-abs(sign(c.colnum-19))) + ((c.seq-1)*80 + 20)*(1-abs(sign(c.colnum-20))) +
((c.seq-1)*80 + 21)*(1-abs(sign(c.colnum-21))) + ((c.seq-1)*80 + 22)*(1-abs(sign(c.colnum-22))) +
((c.seq-1)*80 + 23)*(1-abs(sign(c.colnum-23))) + ((c.seq-1)*80 + 24)*(1-abs(sign(c.colnum-24))) +
((c.seq-1)*80 + 25)*(1-abs(sign(c.colnum-25))) + ((c.seq-1)*80 + 26)*(1-abs(sign(c.colnum-26))) +
((c.seq-1)*80 + 27)*(1-abs(sign(c.colnum-27))) + ((c.seq-1)*80 + 28)*(1-abs(sign(c.colnum-28))) +
((c.seq-1)*80 + 29)*(1-abs(sign(c.colnum-29))) + ((c.seq-1)*80 + 30)*(1-abs(sign(c.colnum-30))) +
((c.seq-1)*80 + 31)*(1-abs(sign(c.colnum-31))) + ((c.seq-1)*80 + 32)*(1-abs(sign(c.colnum-32))) +
((c.seq-1)*80 + 33)*(1-abs(sign(c.colnum-33))) + ((c.seq-1)*80 + 34)*(1-abs(sign(c.colnum-34))) +
((c.seq-1)*80 + 35)*(1-abs(sign(c.colnum-35))) + ((c.seq-1)*80 + 36)*(1-abs(sign(c.colnum-36))) +
((c.seq-1)*80 + 37)*(1-abs(sign(c.colnum-37))) + ((c.seq-1)*80 + 38)*(1-abs(sign(c.colnum-38))) +
((c.seq-1)*80 + 39)*(1-abs(sign(c.colnum-39))) + ((c.seq-1)*80 + 40)*(1-abs(sign(c.colnum-40))) +
((c.seq-1)*80 + 41)*(1-abs(sign(c.colnum-41))) + ((c.seq-1)*80 + 42)*(1-abs(sign(c.colnum-42))) +
((c.seq-1)*80 + 43)*(1-abs(sign(c.colnum-43))) + ((c.seq-1)*80 + 44)*(1-abs(sign(c.colnum-44))) +
((c.seq-1)*80 + 45)*(1-abs(sign(c.colnum-45))) + ((c.seq-1)*80 + 46)*(1-abs(sign(c.colnum-46))) +
((c.seq-1)*80 + 47)*(1-abs(sign(c.colnum-47))) + ((c.seq-1)*80 + 48)*(1-abs(sign(c.colnum-48))) +
((c.seq-1)*80 + 49)*(1-abs(sign(c.colnum-49))) + ((c.seq-1)*80 + 50)*(1-abs(sign(c.colnum-50))) +
((c.seq-1)*80 + 51)*(1-abs(sign(c.colnum-51))) + ((c.seq-1)*80 + 52)*(1-abs(sign(c.colnum-52))) +
((c.seq-1)*80 + 53)*(1-abs(sign(c.colnum-53))) + ((c.seq-1)*80 + 54)*(1-abs(sign(c.colnum-54))) +
((c.seq-1)*80 + 55)*(1-abs(sign(c.colnum-55))) + ((c.seq-1)*80 + 56)*(1-abs(sign(c.colnum-56))) +
((c.seq-1)*80 + 57)*(1-abs(sign(c.colnum-57))) + ((c.seq-1)*80 + 58)*(1-abs(sign(c.colnum-58))) +
((c.seq-1)*80 + 59)*(1-abs(sign(c.colnum-59))) + ((c.seq-1)*80 + 60)*(1-abs(sign(c.colnum-60))) +
((c.seq-1)*80 + 61)*(1-abs(sign(c.colnum-61))) + ((c.seq-1)*80 + 62)*(1-abs(sign(c.colnum-62))) +
((c.seq-1)*80 + 63)*(1-abs(sign(c.colnum-63))) + ((c.seq-1)*80 + 64)*(1-abs(sign(c.colnum-64))) +
((c.seq-1)*80 + 65)*(1-abs(sign(c.colnum-65))) + ((c.seq-1)*80 + 66)*(1-abs(sign(c.colnum-66))) +
((c.seq-1)*80 + 67)*(1-abs(sign(c.colnum-67))) + ((c.seq-1)*80 + 68)*(1-abs(sign(c.colnum-68))) +
((c.seq-1)*80 + 69)*(1-abs(sign(c.colnum-69))) + ((c.seq-1)*80 + 70)*(1-abs(sign(c.colnum-70))) +
((c.seq-1)*80 + 71)*(1-abs(sign(c.colnum-71))) + ((c.seq-1)*80 + 72)*(1-abs(sign(c.colnum-72))) +
((c.seq-1)*80 + 73)*(1-abs(sign(c.colnum-73))) + ((c.seq-1)*80 + 74)*(1-abs(sign(c.colnum-74))) +
((c.seq-1)*80 + 75)*(1-abs(sign(c.colnum-75))) + ((c.seq-1)*80 + 76)*(1-abs(sign(c.colnum-76))) +
((c.seq-1)*80 + 77)*(1-abs(sign(c.colnum-77))) + ((c.seq-1)*80 + 78)*(1-abs(sign(c.colnum-78))) +
((c.seq-1)*80 + 79)*(1-abs(sign(c.colnum-79))) + ((c.seq-1)*80 + 80)*(1-abs(sign(c.colnum-80)))
,9),

/** And here is the Weight of the cell **/

str(
isnull(convert(real,s.c0)*(1-abs(sign(c.colnum-1 ))) ,0) + isnull(convert(real,s.c1)*(1-abs(sign(c.colnum-2 ))) ,0) +
isnull(convert(real,s.c2)*(1-abs(sign(c.colnum-3 ))) ,0) + isnull(convert(real,s.c3)*(1-abs(sign(c.colnum-4 ))) ,0) +
isnull(convert(real,s.c4)*(1-abs(sign(c.colnum-5 ))) ,0) + isnull(convert(real,s.c5)*(1-abs(sign(c.colnum-6 ))) ,0) +
isnull(convert(real,s.c6)*(1-abs(sign(c.colnum-7 ))) ,0) + isnull(convert(real,s.c7)*(1-abs(sign(c.colnum-8 ))) ,0) +
isnull(convert(real,s.c8)*(1-abs(sign(c.colnum-9 ))) ,0) + isnull(convert(real,s.c9)*(1-abs(sign(c.colnum-10))) ,0) +
isnull(convert(real,s.c10)*(1-abs(sign(c.colnum-11))) ,0) + isnull(convert(real,s.c11)*(1-abs(sign(c.colnum-12))) ,0) +
isnull(convert(real,s.c12)*(1-abs(sign(c.colnum-13))) ,0) + isnull(convert(real,s.c13)*(1-abs(sign(c.colnum-14))) ,0) +
isnull(convert(real,s.c14)*(1-abs(sign(c.colnum-15))) ,0) + isnull(convert(real,s.c15)*(1-abs(sign(c.colnum-16))) ,0) +
isnull(convert(real,s.c16)*(1-abs(sign(c.colnum-17))) ,0) + isnull(convert(real,s.c17)*(1-abs(sign(c.colnum-18))) ,0) +
isnull(convert(real,s.c18)*(1-abs(sign(c.colnum-19))) ,0) + isnull(convert(real,s.c19)*(1-abs(sign(c.colnum-20))) ,0) +
isnull(convert(real,s.c20)*(1-abs(sign(c.colnum-21))) ,0) + isnull(convert(real,s.c21)*(1-abs(sign(c.colnum-22))) ,0) +
isnull(convert(real,s.c22)*(1-abs(sign(c.colnum-23))) ,0) + isnull(convert(real,s.c23)*(1-abs(sign(c.colnum-24))) ,0) +
isnull(convert(real,s.c24)*(1-abs(sign(c.colnum-25))) ,0) + isnull(convert(real,s.c25)*(1-abs(sign(c.colnum-26))) ,0) +
isnull(convert(real,s.c26)*(1-abs(sign(c.colnum-27))) ,0) + isnull(convert(real,s.c27)*(1-abs(sign(c.colnum-28))) ,0) +
isnull(convert(real,s.c28)*(1-abs(sign(c.colnum-29))) ,0) + isnull(convert(real,s.c29)*(1-abs(sign(c.colnum-30))) ,0) +
isnull(convert(real,s.c30)*(1-abs(sign(c.colnum-31))) ,0) + isnull(convert(real,s.c31)*(1-abs(sign(c.colnum-32))) ,0) +
isnull(convert(real,s.c32)*(1-abs(sign(c.colnum-33))) ,0) + isnull(convert(real,s.c33)*(1-abs(sign(c.colnum-34))) ,0) +
isnull(convert(real,s.c34)*(1-abs(sign(c.colnum-35))) ,0) + isnull(convert(real,s.c35)*(1-abs(sign(c.colnum-36))) ,0) +
isnull(convert(real,s.c36)*(1-abs(sign(c.colnum-37))) ,0) + isnull(convert(real,s.c37)*(1-abs(sign(c.colnum-38))) ,0) +
isnull(convert(real,s.c38)*(1-abs(sign(c.colnum-39))) ,0) + isnull(convert(real,s.c39)*(1-abs(sign(c.colnum-40))) ,0) +
isnull(convert(real,s.c40)*(1-abs(sign(c.colnum-41))) ,0) + isnull(convert(real,s.c41)*(1-abs(sign(c.colnum-42))) ,0) +
isnull(convert(real,s.c42)*(1-abs(sign(c.colnum-43))) ,0) + isnull(convert(real,s.c43)*(1-abs(sign(c.colnum-44))) ,0) +
isnull(convert(real,s.c44)*(1-abs(sign(c.colnum-45))) ,0) + isnull(convert(real,s.c45)*(1-abs(sign(c.colnum-46))) ,0) +
isnull(convert(real,s.c46)*(1-abs(sign(c.colnum-47))) ,0) + isnull(convert(real,s.c47)*(1-abs(sign(c.colnum-48))) ,0) +
isnull(convert(real,s.c48)*(1-abs(sign(c.colnum-49))) ,0) + isnull(convert(real,s.c49)*(1-abs(sign(c.colnum-50))) ,0) +
isnull(convert(real,s.c50)*(1-abs(sign(c.colnum-51))) ,0) + isnull(convert(real,s.c51)*(1-abs(sign(c.colnum-52))) ,0) +
isnull(convert(real,s.c52)*(1-abs(sign(c.colnum-53))) ,0) + isnull(convert(real,s.c53)*(1-abs(sign(c.colnum-54))) ,0) +
isnull(convert(real,s.c54)*(1-abs(sign(c.colnum-55))) ,0) + isnull(convert(real,s.c55)*(1-abs(sign(c.colnum-56))) ,0) +
isnull(convert(real,s.c56)*(1-abs(sign(c.colnum-57))) ,0) + isnull(convert(real,s.c57)*(1-abs(sign(c.colnum-58))) ,0) +
isnull(convert(real,s.c58)*(1-abs(sign(c.colnum-59))) ,0) + isnull(convert(real,s.c59)*(1-abs(sign(c.colnum-60))) ,0) +
isnull(convert(real,s.c60)*(1-abs(sign(c.colnum-61))) ,0) + isnull(convert(real,s.c61)*(1-abs(sign(c.colnum-62))) ,0) +
isnull(convert(real,s.c62)*(1-abs(sign(c.colnum-63))) ,0) + isnull(convert(real,s.c63)*(1-abs(sign(c.colnum-64))) ,0) +
isnull(convert(real,s.c64)*(1-abs(sign(c.colnum-65))) ,0) + isnull(convert(real,s.c65)*(1-abs(sign(c.colnum-66))) ,0) +
isnull(convert(real,s.c66)*(1-abs(sign(c.colnum-67))) ,0) + isnull(convert(real,s.c67)*(1-abs(sign(c.colnum-68))) ,0) +
isnull(convert(real,s.c68)*(1-abs(sign(c.colnum-69))) ,0) + isnull(convert(real,s.c69)*(1-abs(sign(c.colnum-70))) ,0) +
isnull(convert(real,s.c70)*(1-abs(sign(c.colnum-71))) ,0) + isnull(convert(real,s.c71)*(1-abs(sign(c.colnum-72))) ,0) +
isnull(convert(real,s.c72)*(1-abs(sign(c.colnum-73))) ,0) + isnull(convert(real,s.c73)*(1-abs(sign(c.colnum-74))) ,0) +
isnull(convert(real,s.c74)*(1-abs(sign(c.colnum-75))) ,0) + isnull(convert(real,s.c75)*(1-abs(sign(c.colnum-76))) ,0) +
isnull(convert(real,s.c76)*(1-abs(sign(c.colnum-77))) ,0) + isnull(convert(real,s.c77)*(1-abs(sign(c.colnum-78))) ,0) +
isnull(convert(real,s.c78)*(1-abs(sign(c.colnum-79))) ,0) + isnull(convert(real,s.c79)*(1-abs(sign(c.colnum-80))) ,0)
,10,8),

/** And finally, here is the Value of the cell **/

substring(convert(varbinary(255),v.c0),(1-abs(sign(c.colnum-1 ))) ,255) + substring(convert(varbinary(255),v.c1),(1-abs(sign(c.colnum-2 ))) ,255) +
substring(convert(varbinary(255),v.c2),(1-abs(sign(c.colnum-3 ))) ,255) + substring(convert(varbinary(255),v.c3),(1-abs(sign(c.colnum-4 ))) ,255) +
substring(convert(varbinary(255),v.c4),(1-abs(sign(c.colnum-5 ))) ,255) + substring(convert(varbinary(255),v.c5),(1-abs(sign(c.colnum-6 ))) ,255) +
substring(convert(varbinary(255),v.c6),(1-abs(sign(c.colnum-7 ))) ,255) + substring(convert(varbinary(255),v.c7),(1-abs(sign(c.colnum-8 ))) ,255) +
substring(convert(varbinary(255),v.c8),(1-abs(sign(c.colnum-9 ))) ,255) + substring(convert(varbinary(255),v.c9),(1-abs(sign(c.colnum-10))) ,255) +
substring(convert(varbinary(255),v.c10),(1-abs(sign(c.colnum-11))) ,255) + substring(convert(varbinary(255),v.c11),(1-abs(sign(c.colnum-12))) ,255) +
substring(convert(varbinary(255),v.c12),(1-abs(sign(c.colnum-13))) ,255) + substring(convert(varbinary(255),v.c13),(1-abs(sign(c.colnum-14))) ,255) +
substring(convert(varbinary(255),v.c14),(1-abs(sign(c.colnum-15))) ,255) + substring(convert(varbinary(255),v.c15),(1-abs(sign(c.colnum-16))) ,255) +
substring(convert(varbinary(255),v.c16),(1-abs(sign(c.colnum-17))) ,255) + substring(convert(varbinary(255),v.c17),(1-abs(sign(c.colnum-18))) ,255) +
substring(convert(varbinary(255),v.c18),(1-abs(sign(c.colnum-19))) ,255) + substring(convert(varbinary(255),v.c19),(1-abs(sign(c.colnum-20))) ,255) +
substring(convert(varbinary(255),v.c20),(1-abs(sign(c.colnum-21))) ,255) + substring(convert(varbinary(255),v.c21),(1-abs(sign(c.colnum-22))) ,255) +
substring(convert(varbinary(255),v.c22),(1-abs(sign(c.colnum-23))) ,255) + substring(convert(varbinary(255),v.c23),(1-abs(sign(c.colnum-24))) ,255) +
substring(convert(varbinary(255),v.c24),(1-abs(sign(c.colnum-25))) ,255) + substring(convert(varbinary(255),v.c25),(1-abs(sign(c.colnum-26))) ,255) +
substring(convert(varbinary(255),v.c26),(1-abs(sign(c.colnum-27))) ,255) + substring(convert(varbinary(255),v.c27),(1-abs(sign(c.colnum-28))) ,255) +
substring(convert(varbinary(255),v.c28),(1-abs(sign(c.colnum-29))) ,255) + substring(convert(varbinary(255),v.c29),(1-abs(sign(c.colnum-30))) ,255) +
substring(convert(varbinary(255),v.c30),(1-abs(sign(c.colnum-31))) ,255) + substring(convert(varbinary(255),v.c31),(1-abs(sign(c.colnum-32))) ,255) +
substring(convert(varbinary(255),v.c32),(1-abs(sign(c.colnum-33))) ,255) + substring(convert(varbinary(255),v.c33),(1-abs(sign(c.colnum-34))) ,255) +
substring(convert(varbinary(255),v.c34),(1-abs(sign(c.colnum-35))) ,255) + substring(convert(varbinary(255),v.c35),(1-abs(sign(c.colnum-36))) ,255) +
substring(convert(varbinary(255),v.c36),(1-abs(sign(c.colnum-37))) ,255) + substring(convert(varbinary(255),v.c37),(1-abs(sign(c.colnum-38))) ,255) +
substring(convert(varbinary(255),v.c38),(1-abs(sign(c.colnum-39))) ,255) + substring(convert(varbinary(255),v.c39),(1-abs(sign(c.colnum-40))) ,255) +
substring(convert(varbinary(255),v.c40),(1-abs(sign(c.colnum-41))) ,255) + substring(convert(varbinary(255),v.c41),(1-abs(sign(c.colnum-42))) ,255) +
substring(convert(varbinary(255),v.c42),(1-abs(sign(c.colnum-43))) ,255) + substring(convert(varbinary(255),v.c43),(1-abs(sign(c.colnum-44))) ,255) +
substring(convert(varbinary(255),v.c44),(1-abs(sign(c.colnum-45))) ,255) + substring(convert(varbinary(255),v.c45),(1-abs(sign(c.colnum-46))) ,255) +
substring(convert(varbinary(255),v.c46),(1-abs(sign(c.colnum-47))) ,255) + substring(convert(varbinary(255),v.c47),(1-abs(sign(c.colnum-48))) ,255) +
substring(convert(varbinary(255),v.c48),(1-abs(sign(c.colnum-49))) ,255) + substring(convert(varbinary(255),v.c49),(1-abs(sign(c.colnum-50))) ,255) +
substring(convert(varbinary(255),v.c50),(1-abs(sign(c.colnum-51))) ,255) + substring(convert(varbinary(255),v.c51),(1-abs(sign(c.colnum-52))) ,255) +
substring(convert(varbinary(255),v.c52),(1-abs(sign(c.colnum-53))) ,255) + substring(convert(varbinary(255),v.c53),(1-abs(sign(c.colnum-54))) ,255) +
substring(convert(varbinary(255),v.c54),(1-abs(sign(c.colnum-55))) ,255) + substring(convert(varbinary(255),v.c55),(1-abs(sign(c.colnum-56))) ,255) +
substring(convert(varbinary(255),v.c56),(1-abs(sign(c.colnum-57))) ,255) + substring(convert(varbinary(255),v.c57),(1-abs(sign(c.colnum-58))) ,255) +
substring(convert(varbinary(255),v.c58),(1-abs(sign(c.colnum-59))) ,255) + substring(convert(varbinary(255),v.c59),(1-abs(sign(c.colnum-60))) ,255) +
substring(convert(varbinary(255),v.c60),(1-abs(sign(c.colnum-61))) ,255) + substring(convert(varbinary(255),v.c61),(1-abs(sign(c.colnum-62))) ,255) +
substring(convert(varbinary(255),v.c62),(1-abs(sign(c.colnum-63))) ,255) + substring(convert(varbinary(255),v.c63),(1-abs(sign(c.colnum-64))) ,255) +
substring(convert(varbinary(255),v.c64),(1-abs(sign(c.colnum-65))) ,255) + substring(convert(varbinary(255),v.c65),(1-abs(sign(c.colnum-66))) ,255) +
substring(convert(varbinary(255),v.c66),(1-abs(sign(c.colnum-67))) ,255) + substring(convert(varbinary(255),v.c67),(1-abs(sign(c.colnum-68))) ,255) +
substring(convert(varbinary(255),v.c68),(1-abs(sign(c.colnum-69))) ,255) + substring(convert(varbinary(255),v.c69),(1-abs(sign(c.colnum-70))) ,255) +
substring(convert(varbinary(255),v.c70),(1-abs(sign(c.colnum-71))) ,255) + substring(convert(varbinary(255),v.c71),(1-abs(sign(c.colnum-72))) ,255) +
substring(convert(varbinary(255),v.c72),(1-abs(sign(c.colnum-73))) ,255) + substring(convert(varbinary(255),v.c73),(1-abs(sign(c.colnum-74))) ,255) +
substring(convert(varbinary(255),v.c74),(1-abs(sign(c.colnum-75))) ,255) + substring(convert(varbinary(255),v.c75),(1-abs(sign(c.colnum-76))) ,255) +
substring(convert(varbinary(255),v.c76),(1-abs(sign(c.colnum-77))) ,255) + substring(convert(varbinary(255),v.c77),(1-abs(sign(c.colnum-78))) ,255) +
substring(convert(varbinary(255),v.c78),(1-abs(sign(c.colnum-79))) ,255) + substring(convert(varbinary(255),v.c79),(1-abs(sign(c.colnum-80))) ,255)
from #cells c, sysstatistics s, sysstatistics v
where s.id = @tabid
and s.colidarray = convert(varbinary(1),convert(tinyint,@colid))
and s.formatid = 104
and v.id =* s.id
and v.colidarray =* s.colidarray
and v.statid =* s.statid
and v.sequence =* s.sequence
and v.formatid = 102
and c.seq = s.sequence
for read only

/** Wow, I'm glad that's over **/
/** Let's get on with the business at hand **/

print "%1!",@proc_version
print "%1!",@@version
print ''

/** Standard optdiag output **/
begin
print 'Server name: "%1!"',@@servername
print ''
print 'Specified database: "%1!"',@s_dbname
if (@s_tabowner is null)
print 'Specified table owner: not specified'
else
print 'Specified table owner: "%1!"',@s_tabowner
if (@s_tabname is null)
print 'Specified table: not specified'
else
print 'Specified table: "%1!"',@s_tabname
if (@colname is null)
print 'Specified column: not specified'
else
print 'Specified column: "%1!"',@colname
print ''

/*
** Check to see if the @tabname is in sysobjects.
*/

open object_cursor

fetch object_cursor into
@tabid, @u_dbname, @u_dbid,
@u_tabowner, @u_tabname

while (@@sqlstatus = 0)
begin
print 'Table owner: "%1!"',@u_tabowner
print 'Table name: "%1!"',@u_tabname
print ''

dbcc flushstats(@u_dbid, @tabid)

select @ptn_data_pgs = convert(int, max(ptn_data_pgs(@tabid, partitionid)))
from syspartitions
where id = @tabid

---------------------
-- Work on Indexes --
---------------------
open index_cursor
fetch index_cursor into
@indid ,@index_name ,@dol_clustered, @clustered, @keycnt

while (@@sqlstatus = 0)
begin
select @keycnt = @keycnt - isnull(abs(sign(@clustered - 1)),0)
,@index_cols = null
while (@keycnt > 0)
begin
select @index_cols = substring(', ' ,abs(sign(@keycnt - 1)),2)
+ '"' + index_col(@u_tabname, @indid, @keycnt, user_id(@u_tabowner)) + '"'
+ @index_cols
select @keycnt = @keycnt - 1
end
select @leafcnt = ltrim(convert(varchar(32),convert(int,leafcnt))),
@pagecnt = ltrim(convert(varchar(32),convert(int,pagecnt))),
@emptypgcnt = ltrim(convert(varchar(32),convert(int,emptypgcnt))),
@rowcnt = ltrim(convert(varchar(32),str(round(convert(double precision,rowcnt),16),32,16))),
@forwrowcnt = ltrim(convert(varchar(32),str(round(convert(double precision,forwrowcnt),16),32,16))),
@delrowcnt = ltrim(convert(varchar(32),str(round(convert(double precision,delrowcnt),16),32,16))),
@dpagecrcnt = ltrim(convert(varchar(32),str(round(convert(double precision,dpagecrcnt),16),32,16))),
@dpagecr = ltrim(convert(varchar(32),str(round(convert(double precision,dpagecrcnt),16),32,16))),
@ipagecrcnt = ltrim(convert(varchar(32),str(round(convert(double precision,ipagecrcnt),16),32,16))),
@ipagecr = ltrim(convert(varchar(32),str(round(convert(double precision,ipagecrcnt),16),32,16))),
@drowcrcnt = ltrim(convert(varchar(32),str(round(convert(double precision,drowcrcnt),16),32,16))),
@drowcr = ltrim(convert(varchar(32),str(round(convert(double precision,drowcrcnt),16),32,16))),
@oamapgcnt = ltrim(convert(varchar(32),convert(int,oamapgcnt))),
@extent0pgcnt = ltrim(convert(varchar(32),convert(int,extent0pgcnt))),
@datarowsize = ltrim(convert(varchar(32),str(round(convert(double precision,datarowsize),16),32,16))),
@leafrowsize = ltrim(convert(varchar(32),str(round(convert(double precision,leafrowsize),16),32,16))),
@indexheight = ltrim(convert(varchar(32),convert(smallint,indexheight))),
@spare1 = ltrim(convert(varchar(32),convert(int,spare1))),
@spare2 = ltrim(convert(varchar(32),str(round(convert(double precision,spare2),16),32,16)))
from systabstats
where id = @tabid and indid = @indid

----------------------
-- print index info --
----------------------

if (@indid = 0)
print 'Statistics for table: "%1!"',@index_name
else if (1 in (@clustered,@dol_clustered))
print 'Statistics for index: "%1!" (clustered)',@index_name
else
print 'Statistics for index: "%1!" (nonclustered)',@index_name
if (@indid > 0)
print 'Index column list: %1!',@index_cols
else
print ''
if (@clustered = 1 or @indid = 0)
print ' Data page count: %1!',@pagecnt
else
print ' Leaf count: %1!',@leafcnt

if (1 in (@clustered,@dol_clustered) or @indid = 0)
print ' Empty data page count: %1!',@emptypgcnt
else
print ' Empty leaf page count: %1!',@emptypgcnt

if (@clustered = 1 or @indid = 0)
begin
print ' Data row count: %1!',@rowcnt
print ' Forwarded row count: %1!',@forwrowcnt
print ' Deleted row count: %1!',@delrowcnt
end

print ' Data page CR count: %1!',@dpagecrcnt
if ((@clustered = 0 or @dol_clustered = 1) and @indid > 0)
begin
print ' Index page CR count: %1!',@ipagecrcnt
print ' Data row CR count: %1!',@drowcrcnt
end

if (@clustered = 1 or @indid = 0)
print ' OAM + allocation page count: %1!',@oamapgcnt

if (@indid = 0)
print ' First extent data pages: %1!',@extent0pgcnt
else
print ' First extent leaf pages: %1!',@extent0pgcnt
if (@clustered = 1 or @indid = 0)
print ' Data row size: %1!',@datarowsize
else
print ' Leaf row size: %1!',@leafrowsize
if (@indid > 0)
print ' Index height: %1!',@indexheight
if ((@clustered = 1 or @indid = 0) and @ptn_data_pgs is not null)
print ' Pages in largest partition: %1!',@ptn_data_pgs

print ''
print ' Derived statistics:'

if ( (select lower(@option)) in ("cr","cluster ratio") )
begin
print ' Data page cluster ratio: proprietary'
end
else
print ' Data page cluster ratio: proprietary'
if ((@clustered = 0 or @dol_clustered = 1) and @indid > 0)
begin
print ' Index page cluster ratio: proprietary'
print ' Data row cluster ratio: proprietary'
end
print ''

fetch index_cursor into
@indid ,@index_name ,@dol_clustered ,@clustered, @keycnt
end
close index_cursor

---------------------
-- Work on Columns --
---------------------
open col_cursor
fetch col_cursor into
@colid, @colidarray, @colidarray_len, @colname, @statid, @c1stat, @last_updt, @rc_density, @tot_density
,@steps_act, @steps_req, @typename, @collength, @precision, @scale, @r_sel, @between_sel

while (@@sqlstatus = 0)
begin
if (@steps_act is not null)
print 'Statistics for column: "%1!"',@colname
else
begin -- BUILD A COLUMN GROUP NAME
select @colgroup_name = null
while (@colidarray_len > 0)
begin
select @colgroup_name =
substring(', ' ,abs(sign(@colidarray_len - 1)),2)
+ '"' + name + '"'
+ @colgroup_name
from syscolumns
where id = @tabid
and colid = convert(tinyint,substring(@colidarray,@colidarray_len,1))
select @colidarray_len = @colidarray_len - 1
end
print 'Statistics for column group: %1!',@colgroup_name
end
print 'Last update of column statistics: %1!',@last_updt
if (@c1stat & 2 = 2)
print 'Statistics loaded from Optdiag.'
print ''
print ' Range cell density: %1!',@rc_density
print ' Total density: %1!',@tot_density
if (@r_sel is not null)
print ' Range selectivity: %1!',@r_sel
else
print ' Range selectivity: default used (0.33)'
if (@between_sel is not null)
print ' In between selectivity: %1!',@between_sel
else
print ' In between selectivity: default used (0.25)'
print ''
if (@steps_act is not null) /** Print a Histogram **/
begin
truncate table #cells
select @freq_cell = 0, @seq = 1
select @used_count = isnull(sum(usedcount),0)
from sysstatistics
where id = @tabid
and statid = @statid
and colidarray = convert(varbinary(1),convert(tinyint,@colid))
and formatid = 104
and sequence = @seq
while (@used_count > 0)
begin
select @rownum = 1
while (@rownum <= @used_count)
begin
insert into #cells(seq,colnum) values (@seq,@rownum)
select @rownum = @rownum + 1
end
select @seq = @seq + 1
select @used_count = isnull(sum(usedcount),0)
from sysstatistics
where id = @tabid
and statid = @statid
and colidarray = convert(varbinary(1),convert(tinyint,@colid))
and formatid = 104
and sequence = @seq
end

print 'Histogram for column: "%1!"',@colname
if (@typename in ("int","intn"))
select @typename = "integer"
if (@typename = "float" and @collength = "4")
select @typename = "real"
if (@typename = "float" and @collength = "8")
select @typename = "double precision"
if (@typename in ("varchar","nvarchar","char","nchar","binary","varbinary","float","floatn"))
print 'Column datatype: %1!(%2!)',@typename,@collength
else if (@typename in ("numeric","decimal","numericn","decimaln"))
print 'Column datatype: %1!(%2!,%3!)',@typename,@precision,@scale
else
print 'Column datatype: %1!',@typename
print 'Requested step count: %1!',@steps_req
print 'Actual step count: %1!',@steps_act
print ''
print ' Step Weight Value'
print ''

open histogram_cursor
fetch histogram_cursor into
@step, @weight, @value_raw
while (@@sqlstatus = 0)
begin
select
@value_c =
CASE
WHEN @typename in ("varchar","nvarchar","char","nchar")
THEN '"' + convert(varchar(255),@value_raw) + '"'

WHEN @typename in ("int","intn","integer")
THEN str(convert(int,@value_raw),10)

WHEN @typename in ("smallint")
THEN str(convert(smallint,@value_raw),10)

WHEN @typename in ("tinyint")
THEN str(convert(tinyint,@value_raw),10)

/** Oh, oh, a scaled numeric, where does the decimal place go??? **/
WHEN (@typename in ("numeric","decimal","numericn","decimaln") and convert(smallint,@scale) > 0)
THEN str(convert(numeric(38),right(replicate(0x00,255-convert(smallint,@collength)) + @value_raw,17))
/* move over @scale decimal places please */
/power(convert(numeric,10),convert(smallint,@scale))
/* make room for @precision, minus, and decimal signs */
, convert(smallint,@precision)+2,convert(smallint,@scale))

WHEN (@typename in ("numeric","decimal","numericn","decimaln") and @scale = "0")
THEN str(convert(numeric(38),right(replicate(0x00,255-convert(smallint,@collength)) + @value_raw,17))
, convert(smallint,@precision))

WHEN (@typename in ("float","floatn","real") and @collength = "4")
THEN str(convert(real,@value_raw),40,8)

WHEN (@typename in ("float","floatn","double precision") and @collength = "8")
THEN str(convert(double precision,@value_raw),40,16)

WHEN @typename in ("money","moneyn","smallmoney")
THEN str(convert(money,@value_raw),22,2)

WHEN @typename in ("datetime","datetimn")
THEN '"' + convert(varchar(255),convert(datetime,@value_raw),109) + '"'

WHEN @typename in ("smalldatetime")
THEN '"' + convert(varchar(255),convert(smalldatetime,@value_raw),100) + '"'

ELSE @value_raw
END

if (@value_raw is null)
select @freq_cell = 1, @prev_step = @step, @prev_weight = @weight, @value_c = "null"
else
begin
select @value_c = ltrim(@value_c)
if (@freq_cell = 1)
begin /* Printing a frequency cell */
if (@typename in ("binary","varbinary","timestamp"))
begin
print '%1! %2! < %3!',@prev_step,@prev_weight,@value_raw
print '%1! %2! = %3!',@step,@weight,@value_raw
end
else
begin
print '%1! %2! < %3!',@prev_step,@prev_weight,@value_c
print '%1! %2! = %3!',@step,@weight,@value_c
end
end
else /* NOT printing a frequency cell */
begin
if (@typename in ("binary","varbinary","timestamp"))
print '%1! %2! <= %3!',@step,@weight,@value_raw
else
print '%1! %2! <= %3!',@step,@weight,@value_c
end
select @freq_cell = 0
end

fetch histogram_cursor into
@step, @weight, @value_raw
end
close histogram_cursor
/* Is there only one cell (a frequency cell)? */
if (@freq_cell = 1)
print '%1! %2! = %3!',@prev_step,@prev_weight,@value_c
print ''
end /* histogram print */

fetch col_cursor into
@colid, @colidarray, @colidarray_len, @colname, @statid, @c1stat, @last_updt, @rc_density, @tot_density
,@steps_act, @steps_req, @typename, @collength, @precision, @scale, @r_sel, @between_sel
end
close col_cursor
-----------------------
-- Done with columns --
-----------------------

------------------------------
-- print cols with no stats --
------------------------------
select @keycnt = 0
open nostats_cursor
fetch nostats_cursor into @colname
while (@@sqlstatus = 0)
begin
select @keycnt = @keycnt + 1
if (@keycnt = 1)
print 'No statistics for remaining columns: "%1!"',@colname
else if (@keycnt = 2)
print '(default values used) "%1!"',@colname
else
print ' "%1!"',@colname
fetch nostats_cursor into @colname
end
close nostats_cursor
if (@keycnt = 1)
print '(default values used)'

print ''

fetch object_cursor into
@tabid, @u_dbname, @u_dbid,
@u_tabowner, @u_tabname
end
close object_cursor
-----------------------
-- Done with Objects --
-----------------------
end

go

grant execute on sp__optdiag to public
go
use sybsystemprocs
go
drop procedure sp__rev_configure
go
create procedure sp__rev_configure
as
declare @sptlang int /* current sessions language */
declare @whichone int /* using english or default lang ? */

if @@trancount = 0
begin
set transaction isolation level 1
set chained off
end

select @whichone = 0

select @sptlang = @@langid

if @@langid != 0
begin
if not exists (
select * from master.dbo.sysmessages where error
between 17015 and 17049
and langid = @@langid)
select @sptlang = 0
else
if not exists (
select * from master.dbo.sysmessages where error
between 17100 and 17109
and langid = @@langid)
select @sptlang = 0
end

if @sptlang = 0
begin
select "-- sp_configure settings"
= "sp_configure '" + name + "', "
+ convert( char(12), c.value)
+ char(13) + char(10) + "go"
from master.dbo.spt_values a,
master.dbo.syscurconfigs c
where a.type = "C"
and a.number *= c.config
and a.number >= 0
end
else
begin
select "-- sp_configure settings"
= "sp_configure '" + name + "', "
+ convert(char(12), c.value)
+ char(13) + char(10) + "go"
from master.dbo.spt_values a,
master.dbo.syscurconfigs c,
master.dbo.sysmessages d
where type = "C"
and a.number *= c.config
and a.number >= 0
and msgnum = error and isnull(langid, 0) = @sptlang
end
return (0)
go
--
-- You may or may not wish to do the following.
--
--grant execute on sp__rev_configure to public
--go
use sybsystemprocs
go
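
Running sp__rev_configure simply selects one sp_configure command per
option; capture the output (for example with isql -o) to replay the
settings on another server. It looks something like this (the values
shown are illustrative only):

1> sp__rev_configure
2> go

-- sp_configure settings
sp_configure 'recovery interval', 5
go
sp_configure 'allow updates', 0
go
...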

/*
* DROP PROC sp__revroles
*/
IF OBJECT_ID('sp__revroles') IS NOT NULL
BEGIN
DROP PROC sp__revroles
PRINT '<<< Dropped proc sp__revroles >>>'
END
go
create procedure sp__revroles
as
/* Created 03/05/97 by Clayton Groom
creates a reverse engineered set of commands to restore user roles
*/
select "exec sp_role grant, " + u.name + ", " + s.name + char(13) + char(10) + "go"
from master..syssrvroles s,
sysroles r,
sysusers u
where r.id = s.srid
and r.lrid = u.uid
and s.name <> u.name
go

IF OBJECT_ID('sp__revroles') IS NOT NULL
PRINT '<<< Created proc sp__revroles >>>'
ELSE
PRINT '<<< Failed to create proc sp__revroles >>>'
go
use sybsystemprocs
go

if object_id('sp_days') is not NULL
drop proc sp_days
go

create proc sp_days @days tinyint OUTPUT, @month tinyint, @year smallint
as
declare @date datetime
select @date=convert(char,@month)+'/01/'+convert(char, @year)
select @days=datediff(dd,@date, dateadd(mm,1,@date))
select @days
go
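
A quick sanity check that it handles leap years: the following should
report 29 for February 2004.

declare @d tinyint
exec sp_days @d output, 2, 2004
go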

grant exec on sp_days to public
go
use sybsystemprocs
go

if object_id('dbo.sp_ddl_create_table') is not null
begin
drop procedure sp_ddl_create_table
print "Dropping sp_ddl_create_table"
end
go

create proc sp_ddl_create_table
as

-- Creates the DDL for all the user tables in the
-- current database

select right('create table ' + so1.name + '(' + '
', 255 * ( abs( sign(sc1.colid - 1) - 1 ) ) )+
sc1.name + ' ' +
st1.name + ' ' +
substring( '(' + rtrim( convert( char, sc1.length ) ) + ') ', 1,
patindex('%char', st1.name ) * 10 ) +
substring( '(' + rtrim( convert( char, sc1.prec ) ) + ', ' + rtrim(
convert( char, sc1.scale ) ) + ') ' , 1, patindex('numeric', st1.name ) * 10 ) +
substring( 'NOT NULL', ( convert( int, convert( bit,( sc1.status & 8 ) ) ) * 4 ) + 1,
8 * abs(convert(bit, (sc1.status & 0x80)) - 1 ) ) +
right('identity ', 9 * convert(bit, (sc1.status & 0x80)) ) +
right(',', 5 * ( convert(int,sc2.colid) - convert(int,sc1.colid) ) ) +
right(' )
' + 'go' + '
' + '
', 255 * abs( sign( ( convert(int,sc2.colid) - convert(int,sc1.colid) ) ) -
1 ) )
from sysobjects so1,
syscolumns sc1,
syscolumns sc2,
systypes st1
where so1.type = 'U'
and sc1.id = so1.id
and st1.usertype = sc1.usertype
and sc2.id = sc1.id
and sc2.colid = (select max(colid)
from syscolumns
where id = sc1.id)
order by so1.name, sc1.colid
go

if object_id('dbo.sp_ddl_create_table') is not null
begin
grant execute on sp_ddl_create_table to public
print "Created sp_ddl_create_table"
end
else
print "Failed to create sp_ddl_create_table"
go
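
The DDL produced is deliberately rough: column names, datatypes, lengths,
NULL/NOT NULL and identity columns are covered, but not constraints,
defaults or indexes. For a hypothetical two-column table called foo the
output looks roughly like this:

create table foo(
bar int NOT NULL,
baz varchar (30) NULL )
go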

go
IF OBJECT_ID('sp_desc') IS NOT NULL
BEGIN
DROP PROCEDURE sp_desc
IF OBJECT_ID('sp_desc') IS NOT NULL
PRINT '<<< FAILED DROPPING PROCEDURE sp_desc >>>'
ELSE
PRINT '<<< DROPPED PROCEDURE sp_desc >>>'
END
go

create procedure sp_desc @table_name char(30) = NULL
--
-- Snarfed from CDS, cannot remember who posted the original.
-- Update for dec and numeric data types, plus ensured that
-- varchars came out as that.
--
-- David Owen 2001 (do...@midsomer.org)

as
-- This stored procedure returns a description of a SQL Server table in
-- a format more like the Oracle DESC command.

if (@table_name IS NULL)
begin
raiserror 20001 "Must specify table name for sp_desc!"
return
end

declare @min_id int

select
C.colid 'column_id',
C.name 'column_name',
T.name 'column_type',
T.usertype 'user_type',
T.type 'base_type',
C.length 'column_length',
C.scale 'column_scale',
C.status 'column_is_null'
into
#tab_descr
from
syscolumns C,
sysobjects O,
systypes T
where
C.id = O.id
and C.usertype = T.usertype
and O.name = @table_name

if (@@rowcount = 0)
begin
raiserror 20001 "Table specified does not exist"
return
end

update
#tab_descr
set
user_type = systypes.usertype
from
systypes
where
systypes.type = #tab_descr.base_type
and systypes.usertype < 100

-- update
-- #tab_descr
-- set
-- column_type = name
-- from
-- systypes
-- where
-- #tab_descr.user_type = systypes.usertype

update
#tab_descr
set
column_type = name
from
systypes st,
#tab_descr td
where td.base_type = st.type
and td.user_type > 100

update
#tab_descr
set
column_type = column_type + "(" + LTRIM(RTRIM(str(column_length)))+")"
where
column_type in ("char", "varchar", "nchar", "nvarchar", "binary", "varbinary")

update
#tab_descr
set
column_type = column_type + "(" +
LTRIM(RTRIM(str(column_length))) +
"," +
LTRIM(RTRIM(str(column_scale))) +
")"
where
column_type in ("dec", "numeric", "decimal")

-- update
-- #tab_descr
-- set
-- column_type = "varchar("+LTRIM(RTRIM(str(column_length)))+")"
-- where
-- column_type = "sysname"

select
@min_id = min(column_id)
from
#tab_descr

update
#tab_descr
set
column_id = column_id - @min_id + 1

print @table_name

select
convert(char(5), "("+LTRIM(str(column_id))+")") 'No.',
column_name 'Column Name',
convert(char(20), column_type) 'Datatype',
case column_is_null
when 0 then "NOT NULL"
else ""
end
from
#tab_descr
order by column_id
go

IF OBJECT_ID('dbo.sp_desc') IS NOT NULL
BEGIN
PRINT '<<< CREATED PROCEDURE dbo.sp_desc >>>'
GRANT EXECUTE ON dbo.sp_desc TO public
END
ELSE
PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_desc >>>'
go
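
Usage mirrors Oracle's DESC command. Against the pubs2 authors table the
output looks something like this (abridged and illustrative):

1> sp_desc authors
2> go
authors

No.   Column Name                    Datatype
(1)   au_id                          varchar(11)          NOT NULL
(2)   au_lname                       varchar(40)          NOT NULL
(3)   au_fname                       varchar(20)          NOT NULL
...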
use sybsystemprocs
go
/*
* DROP PROC dbo.sp_devusage
*/
IF OBJECT_ID('dbo.sp_devusage') IS NOT NULL
BEGIN
DROP PROC dbo.sp_devusage
PRINT '<<< DROPPED PROC dbo.sp_devusage >>>'
END
go
CREATE PROCEDURE sp_devusage (@device_name char(30) = NULL)
AS
IF @device_name != NULL
BEGIN
SELECT dev_name = substring(dv.name,1,20),db_name = substring(db.name,1,20),
size_mb = u.size/512.0,
u.segmap,
vdevno = u.vstart/power(2,24)
FROM master..sysusages u , master..sysdevices dv,
master..sysdatabases db
WHERE u.vstart between dv.low and dv.high
AND db.dbid = u.dbid
AND cntrltype = 0
AND dv.name = @device_name
ORDER BY dv.name
COMPUTE sum(u.size/512.0) by dv.name
END
ELSE
BEGIN
SELECT dev_name = substring(dv.name,1,20),db_name = substring(db.name,1,20),
size_mb = u.size/512.0, u.segmap,
vdevno = u.vstart/power(2,24)
FROM master..sysusages u , master..sysdevices dv,
master..sysdatabases db
WHERE u.vstart between dv.low and dv.high
AND db.dbid = u.dbid
AND cntrltype = 0
ORDER BY dv.name
COMPUTE sum(u.size/512.0) by dv.name
END
go

IF OBJECT_ID('dbo.sp_devusage') IS NOT NULL
PRINT '<<< CREATED PROC dbo.sp_devusage >>>'
ELSE
PRINT '<<< FAILED CREATING PROC dbo.sp_devusage >>>'
go
/*
* Granting/Revoking Permissions on dbo.sp_devusage
*/
GRANT EXECUTE ON dbo.sp_devusage TO public
go
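
Call it with no argument to report every device, or name a device to
restrict the report to that device:

1> sp_devusage
2> go
1> sp_devusage master
2> go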

/*>>>>>>>>>>>>>>>>>>>>>>>>>>> sp_dos <<<<<<<<<<<<<<<<<<<<<<<<<<<<<*/
IF OBJECT_ID('dbo.sp_dos') IS NOT NULL
DROP PROCEDURE sp_dos
go

CREATE PROCEDURE sp_dos
@vcObjectName varchar(30) = NULL
AS
/***********************************************************************
* sp_dos - Display Object Scope
* This procedure graphically displays the scope of a object in
* the database.
*
* Copyright 1996, all rights reserved.
*
* Author: David W. Pledger, Strategic Data Systems, Inc.
*
* Parameters
* ----------------------------------------------------------------
* Name In/Out Description
* ----------------------------------------------------------------
* @vcObjectName In Mandatory - The exact name of a single
* database object for which the call
* hierarchy is to be extracted.
*
* Selected Data
* A sample report follows:
* ----------------------------------------------------------------
*
* SCOPE OF EFFECT FOR OBJECT: ti_users
* +------------------------------------------------------------------+
* (T) ti_users (Trigger on table 'users')
* |
* +--(P) pUT_GetError
* | |
* | +--(U) ui_error
* |
* +--(U) BGRP
* |
* +--(U) user_information (See Triggers: tu_user_information)
* |
* +--(U) users (See Triggers: ti_users, tu_users, td_users)
* |
* +--(P) pUT_LUDVersion
* |
* +--(P) pUT_GetError
* | |
* | +--(U) ui_error
* |
* +--(U) BGRP_LUDVersion
*
* <End of Sample>
*
* Return Values
* ----------------------------------------------------------------
* Value Description
* ----------------------------------------------------------------
* < -99 Unexpected error - should never occur.
*
* -99 to -1 Sybase **reserved** return status values.
*
* 0 Execution succeeded
*
* 1 Execution of this procedure failed.
*
* > 1 Unexpected error - should never occur.
*
***********************************************************************/
BEGIN

/*------------------- Local Declarations -------------------------*/
DECLARE @iObjectID int /* System ID of object */
DECLARE @cObjectType char(1) /* System Object Type code */
DECLARE @vcName varchar(30) /* System Object name */
DECLARE @vcMsg varchar(255) /* Error Message if needed */
DECLARE @iInsTrigID int /* Insert Trigger ID */
DECLARE @iUpdTrigID int /* Update Trigger ID */
DECLARE @iDelTrigID int /* Delete Trigger ID */
DECLARE @vcErrMsg varchar(255) /* Error Message */

/* Local variables to facilitate descending the parent-child
** object hierarchy.
*/
DECLARE @iCurrent int /* Current node in the tree */
DECLARE @iRoot int /* The root node in the tree */
DECLARE @iLevel int /* The current level */

/* Local variables that contain the fragments of the text to
** be displayed while descending the hierarchy.
*/
DECLARE @iDotIndex int /* Index for locating periods */
DECLARE @cConnector char(3) /* '+--' */
DECLARE @cSibSpacer char(3) /* '| ' */
DECLARE @cBar char(1) /* '|' */
DECLARE @cSpacer char(3) /* ' ' */
DECLARE @cPrntStrng1 char(255) /* The first string to print */
DECLARE @cPrntStrng2 char(255) /* The second string to print */
DECLARE @iLoop int /* Temp var used for loop */
DECLARE @vcDepends varchar(255) /* Dependency String */
DECLARE @iDependsItem int /* Index to a string item */

/* Create a temporary table to handle the hierarchical
** decomposition of the task parent-child relationship. The Stack
** table keeps track of where we are while the leaf table keeps
** track of the leaf tasks which need to be performed.
*/
CREATE TABLE #Stack
(iItem int,
iLevel int)

/*------------------- Validate Input Parameters --------------------*/
/* Make sure the table is local to the current database. */
IF (@vcObjectName LIKE "%.%.%") AND (SUBSTRING(@vcObjectName, 1,
CHARINDEX(".", @vcObjectName) - 1) != DB_NAME())
GOTO ErrorNotLocal

/* Now check to see that the object is in sysobjects. */
IF OBJECT_ID(@vcObjectName) IS NULL
GOTO ErrorNotFound

/* ---------------------- Initialization -------------------------*/

/* Don't print any rowcounts while this is in progress. */
SET NOCOUNT ON

/* Retrieve the object ID out of sysobjects */
SELECT @iObjectID = O.id,
@cObjectType = O.type
FROM sysobjects O
WHERE O.name = @vcObjectName

/* Make sure a job exists. */
IF NOT (@@rowcount = 1 and @@error = 0 and @iObjectID > 0)
GOTO ErrorNotFound

/* Initialize the print string pieces. */
SELECT @cConnector = "+--",
@cSibSpacer = "|..",
@cBar = "|",
@cSpacer = "...",
@cPrntStrng1 = "",
@cPrntStrng2 = ""

/* Print a separator line. */
PRINT " "
PRINT "** Utility by David Pledger, Strategic Data Systems, Inc. **"
PRINT "** PO Box 498, Springboro, OH 45066 **"
PRINT " "
PRINT " SCOPE OF EFFECT FOR OBJECT: %1!",@vcObjectName
PRINT "+------------------------------------------------------------------+"

/* -------------------- Show the Hierarchy -----------------------*/
/* Find the root task for this job. The root task is the only task
** that has a parent task ID of null.
*/
SELECT @iRoot = @iObjectID

/* Since there is a root task, we can assign the first
** stack value and assign it a level of one.
*/
SELECT @iCurrent = @iRoot,
@iLevel = 1

/* Prime the stack with the root level. */
INSERT INTO #Stack values (@iCurrent, 1)

/* As long as there are nodes which have not been visited
** within the tree, the level will be > 0. Continue until all
** nodes are visited. This outer loop descends the tree through
** the parent-child relationship of the nodes.
*/
WHILE (@iLevel > 0)
BEGIN

/* Do any nodes exist at the current level? If yes, process them.
** If no, then back out to the previous level.
*/
IF EXISTS
(SELECT *
FROM #Stack S
WHERE S.iLevel = @iLevel)
BEGIN

/* Get the smallest numbered node at the current level. */
SELECT @iCurrent = min(S.iItem)
FROM #Stack S
WHERE S.iLevel = @iLevel

/* Get the name and type of this node. */
SELECT @cObjectType = O.type,
@vcName = O.name,
@iInsTrigID = ISNULL(O.instrig, 0),
@iUpdTrigID = ISNULL(O.updtrig, 0),
@iDelTrigID = ISNULL(O.deltrig, 0)
FROM sysobjects O
WHERE O.id = @iCurrent

/*
* *=================================================* *
* * Print out data for this node. (Consider * *
* * making this a separate procedure.) * *
* *=================================================* *
*/

/* Initialize the print strings to empty (different from NULL).
** @cPrntStrng1 is used to 'double space' the output and
** contains the necessary column connectors, but no data.
** @cPrntStrng2 contains the actual data at the end of the
** string.
*/
SELECT @cPrntStrng1 = ""
SELECT @cPrntStrng2 = ""

/* Level 1 is the root node level. All Jobs have a single
** root task. All other tasks are subordinate to this task.
** No job may have more than one root task.
*/
IF @iLevel = 1
BEGIN
/* Print data for the root node. */
SELECT @cPrntStrng1 = "",
@cPrntStrng2 = "(" + @cObjectType + ") " + @vcName
END
ELSE /* Else part of (IF @iLevel = 1) */
BEGIN

/* Initialize loop variable to 2 since level one has
** already been processed for printing.
*/
SELECT @iLoop = 2

/* Look at the values on the stack at each level to
** determine which symbol should be inserted into the
** print string.
*/
WHILE @iLoop <= @iLevel
BEGIN

/* While the loop variable is less than the current
** level, add the appropriate spacer to line up
** the printed output.
*/
IF @iLoop < @iLevel
BEGIN

/* Is there a sibling (another node which exists
** at the same level) on the stack? If so, use
** one type of separator; otherwise, use another
** type of separator.
*/
IF EXISTS(SELECT * FROM #Stack WHERE iLevel = @iLoop)
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) +
@cSibSpacer
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
@cSibSpacer
END
ELSE
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cSpacer
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cSpacer
END
END
ELSE /* Else part of (IF @iLoop < @iLevel) */
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cBar
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
@cConnector + "(" + @cObjectType + ") " +
@vcName
END

/* Increment the loop variable */
SELECT @iLoop = @iLoop + 1

END /* While @iLoop <= @iLevel */
END /* IF @iLevel = 1 */

/* Spaces are inserted into the string to separate the levels
** into columns in the printed output. Spaces, however, caused
** a number of problems when attempting to concatenate the
** two strings together. To perform the concatenation, the
** function rtrim was used to remove trailing spaces from the string.
** This also removed the spaces we just added. To alleviate
** this problem, we used a period (.) wherever there was
** supposed to be a space. Now that we are ready to print
** the line of text, we need to substitute real spaces
** wherever there is a period in the string. To do this,
** we simply look for periods and substitute spaces. This
** has to be done in a loop since there is no mechanism to
** make this substitution in the whole string at once.
*/

/* Find the first period. */
SELECT @iDotIndex = charindex (".", @cPrntStrng1)

/* If a period exists, substitute a space for it and then
** find the next period.
*/
WHILE @iDotIndex > 0
BEGIN
/* Substitute the space */
SELECT @cPrntStrng1 = stuff(@cPrntStrng1, @iDotIndex, 1, " ")

/* Find the next. */
SELECT @iDotIndex = charindex (".", @cPrntStrng1)
END

/* Do the same thing for the second print string. */
SELECT @iDotIndex = charindex (".", @cPrntStrng2)
WHILE @iDotIndex > 0
BEGIN
SELECT @cPrntStrng2 = stuff(@cPrntStrng2, @iDotIndex, 1, " ")
SELECT @iDotIndex = charindex (".", @cPrntStrng2)
END

SELECT @vcDepends = NULL

IF @iInsTrigID > 0
SELECT @vcDepends = OBJECT_NAME(@iInsTrigID) + " (Insert)"

IF @iUpdTrigID > 0
IF @vcDepends IS NULL
SELECT @vcDepends = OBJECT_NAME(@iUpdTrigID) + " (Update)"
ELSE
SELECT @vcDepends = @vcDepends + ", " +
OBJECT_NAME(@iUpdTrigID) + " (Update)"

IF @iDelTrigID > 0
IF @vcDepends IS NULL
SELECT @vcDepends = OBJECT_NAME(@iDelTrigID) + " (Delete)"
ELSE
SELECT @vcDepends = @vcDepends + ", " +
OBJECT_NAME(@iDelTrigID) + " (Delete)"

IF @vcDepends IS NOT NULL
IF @cObjectType = "T"
SELECT @cPrntStrng2 = @cPrntStrng2 +
" (Trigger on table '" + @vcDepends + "')"
ELSE
SELECT @cPrntStrng2 = @cPrntStrng2 +
" (See Triggers: " + @vcDepends + ")"

/* Remove trailing blanks from the first print string. */
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1)
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2)

/* Print the two strings. */
PRINT @cPrntStrng1
PRINT @cPrntStrng2

/* Remove the current entry from the stack (Pop) */
DELETE #Stack
WHERE #Stack.iLevel = @iLevel
AND #Stack.iItem = @iCurrent

/* Add (push) to the stack all the children of the current
** node.
*/
INSERT INTO #Stack
SELECT D.depid,
@iLevel + 1
FROM sysdepends D
WHERE D.id = @iCurrent

/* If any were added, then we must descend another level. */
IF @@rowcount > 0
BEGIN
SELECT @iLevel = @iLevel + 1
END

END
ELSE
BEGIN
/* We have reached a leaf node. Move back to the previous
** level and see what else is left to process.
*/
SELECT @iLevel = @iLevel - 1
END

END /* While (@iLevel > 0) */

PRINT " "

RETURN (0)

/*------------------------ Error Handling --------------------------*/
ErrorNotLocal:
/* 17460, Table must be in the current database. */
EXEC sp_getmessage 17460, @vcErrMsg OUT
PRINT @vcErrMsg
RETURN (1)

ErrorNotFound:
/* 17461, Table is not in this database. */
EXEC sp_getmessage 17461, @vcErrMsg OUT
PRINT @vcErrMsg
PRINT " "

PRINT "Local object types and objecs are:"

SELECT "Object Type" = type,
"Object Name" = name
FROM sysobjects
WHERE type IN ("U","TR","P","V")
ORDER BY type, name

RETURN (1)

END
go

grant execute on sp_dos to public
go
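
Give it the exact name of a single object, as in the sample report in the
header above:

1> sp_dos ti_users
2> go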

/*
* If sybsystemprocs exists, we wish to use it. If it fails, then we
* should be left in either master or the users defaultdb, both of which
* are probably what we want.
*/

use sybsystemprocs
go

/* Procedure sp_freedevice, owner dbo */
IF OBJECT_ID('sp_freedevice') IS NOT NULL
BEGIN

setuser 'dbo'

DROP PROCEDURE sp_freedevice
IF OBJECT_ID('sp_freedevice') IS NOT NULL
PRINT '<<< FAILED TO DROP PROCEDURE sp_freedevice >>>'
ELSE
PRINT '<<< DROPPED PROCEDURE sp_freedevice >>>'
END
go

setuser 'dbo'
go

/*
* Name: sp_freedevice
* Version: 1.1
* Author: Unknown (if you know who it is/was let me know and I will modify this).
* Description: Prints the current disk usage in a nice table for all of the devices on the system.
* Part of the FAQ ASE code package. Latest version available from URL below.
* Source: http://www.isug.com/Sybase_FAQ/ASE/section9.html
* Maintainer: David Owen (do...@midsomer.org)
*/

create proc sp_freedevice
@devname char(30) = null
as

declare @showdev bit
declare @alloc int

if @devname = null
select @devname = '%'
,@showdev = 0
else
select @showdev = 1

select @alloc = low
from master.dbo.spt_values
where type = 'E'
and number = 1

create table #freedev
(
name char(30)
,size numeric(14,2)
,used numeric(14,2)
)

insert #freedev
select dev.name
,((dev.high - dev.low) * @alloc + 500000) / 1048576
,convert(numeric(14,2), sum((usg.size * @alloc + 500000) / 1048576))
from master.dbo.sysdevices dev
,master.dbo.sysusages usg
where dev.low <= usg.size + usg.vstart - 1
and dev.high >= usg.size + usg.vstart - 1
and dev.cntrltype = 0
group by dev.name

insert #freedev
select name
,convert(numeric(14,2), ((sd.high - sd.low) * @alloc + 500000) / 1048576)
,0
from master.dbo.sysdevices sd
where sd.cntrltype = 0
and not exists (select 1
from #freedev fd
where fd.name = sd.name)

if @showdev = 1
begin
select devname = dev.name
,size = right(replicate(' ', 21) + convert(varchar(18),f.size) + ' MB', 21)
,used = right(replicate(' ', 21) + convert(varchar(18),f.used) + ' MB', 21)
,free = right(replicate(' ', 21) + convert(varchar(18),f.size - f.used) + ' MB', 21)
from master.dbo.sysdevices dev
,#freedev f
where dev.name = f.name
and dev.name like @devname

select dbase = db.name
,size = right(replicate(' ', 21) + convert(varchar(18),
(usg.size * @alloc + 500000) / 1048576
) + ' MB', 21)
,usage = vl.name
from master.dbo.sysdatabases db
,master.dbo.sysusages usg
,master.dbo.sysdevices dev
,master.dbo.spt_values vl
where db.dbid = usg.dbid
and usg.segmap = vl.number
and dev.low <= usg.size + usg.vstart - 1
and dev.high >= usg.size + usg.vstart - 1
and dev.status & 2 = 2
and vl.type = 'S'
and dev.name = @devname
end
else
begin

select total = right(replicate(' ', 21) + convert(varchar(18), sum(size)) + ' MB', 21)
,used = right(replicate(' ', 21) + convert(varchar(18), sum(used)) + ' MB', 21)
,free = right(replicate(' ', 21) + convert(varchar(18), sum(size) - sum(used)) + ' MB', 21)
from #freedev

select devname = dev.name
,size = right(replicate(' ', 21) + convert(varchar(18), f.size) + ' MB', 21)
,used = right(replicate(' ', 21) + convert(varchar(18), f.used) + ' MB', 21)
,free = right(replicate(' ', 21) + convert(varchar(18), f.size - f.used) + ' MB', 21)
from master.dbo.sysdevices dev
,#freedev f
where dev.name = f.name
end
go

IF OBJECT_ID('sp_freedevice') IS NOT NULL
PRINT '<<< CREATED PROCEDURE sp_freedevice >>>'
ELSE
PRINT '<<< FAILED TO CREATE PROCEDURE sp_freedevice >>>'
go

IF OBJECT_ID('sp_freedevice') IS NOT NULL
BEGIN
GRANT EXECUTE ON sp_freedevice TO public
END
go
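
With no argument sp_freedevice prints server-wide totals followed by one
row per device; with a device name it instead reports that device and the
database fragments placed on it:

1> sp_freedevice
2> go
1> sp_freedevice master
2> go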
use sybsystemprocs
go

if object_id('sp_helpoptions') is not null
begin
drop procedure sp_helpoptions
if object_id('sp_helpoptions') is not null
print '<<< Failed to drop procedure sp_helpoptions >>>'
else
print '<<< Dropped procedure sp_helpoptions >>>'
end
go

create procedure sp_helpoptions as

-- initial design by Bret Halford (br...@sybase.com) 10 Jan 2000
-- with assistance from Kimberly Russell
-- relies only on @@options, developed on ASE 11.5.x Solaris

-- This stored procedure displays a list of SET options and indicates
-- for each option if the option is ON or OFF

-- The @@options global variable contains bits that indicate
-- whether certain of the SET command options are on or not.

-- By observing the difference (if any) in @@options value when an
-- option is on and off, a test can be derived for that condition

-- Note that @@options is not documented in the manuals and its details
-- are possibly subject to change without notice and may vary by platform.

-- This procedure can probably be expanded to test for other SET command
-- options as well. If you come up with a test for any other SET option,
-- please send it to me and I will add it to the procedure.

declare @high_bits int
declare @low_bits int
select @high_bits = convert(int,substring(@@options,1,4))
select @low_bits = convert(int,substring(@@options,5,4))

if (@high_bits & 268435456 = 268435456 ) print "showplan is on"
else print "showplan is off"

if (@low_bits & 33554432 = 33554432) print "ansinull is on"
else print "ansinull is off"

if (@low_bits & 536870912 = 536870912) print "ansi_permissions is on"
else print "ansi_permissions is off"

if (@high_bits & -2147418112 = -2147418112) print "arithabort is on"
else print "arithabort is off"

if (@high_bits & 1073741824 = 1073741824) print "arithignore is on"
else print "arithignore is off"

if (@high_bits & 1073741824 = 1073741824) print "arithignore arith_overflow is on"
else print "arithignore arith_overflow is off"

if (@high_bits & 32 = 32) print "close on endtran is on"
else print "close on endtran is off"

if (@high_bits & 32768 = 32768) print "nocount is on"
else print "nocount is off"

-- Note: if 'noexec' or 'parseonly' were on, this procedure could not run,
-- so no test is necessary.
print 'noexec is off'
print 'parseonly is off'

go

if object_id('sp_helpoptions') is not null
begin
print '<<< Created procedure sp_helpoptions >>>'
grant execute on sp_helpoptions to public
end
else
print '<<< Failed to create procedure sp_helpoptions >>>'
go
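
To derive a test for an option not covered above, compare @@options with
the option off and then on; the bit that changes is the mask to test for.
A minimal sketch (it assumes the option takes effect within the batch;
if not, split the batch at a go and compare the two values by hand):

declare @before int, @after int
select @before = convert(int,substring(@@options,5,4))
set ansinull on
select @after = convert(int,substring(@@options,5,4))
select @before ^ @after   -- 33554432 if only ansinull changed
go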

use sybsystemprocs
go

drop procedure sp_lockconfig
go

-- sp_lockconfig, 'Lists data for lock promotions and index locking schemes'
-- sp_lockconfig, ' if SYS_FLAG is non-null include system tables'


create procedure sp_lockconfig (@SYS_FLAG char (1) = NULL) as
set ansinull on
set flushmessage on
set nocount on
set string_rtruncation on

print ' '

if (@@trancount = 0)
begin
set chained off

if (@@isolation > 1)
begin
set transaction isolation level 1
end
end
else
begin
print ' sp_lockconfig CANNOT BE RUN FROM WITHIN A TRANSACTION.'

print ' '

return 1
end

declare @allcount varchar (7),
@dpcount varchar (7),
@drcount varchar (7),
@sysval smallint,
@tabtext varchar (12)

create table #lockcfg
(sort tinyint not null,
type char (8) not null,
name varchar (30) not null,
levelx varchar ( 5) not null,
txt varchar (33) not null)

insert into #lockcfg
select 1,
'Table',
object_name (object),
'page',
substring (char_value, 1, 33)
from sysattributes
where class = 5
and attribute = 0
and object_type = 'T'

insert into #lockcfg
select 1,
'Table',
object_name (object),
'row',
substring (char_value, 1, 33)
from sysattributes
where class = 5
and attribute = 1
and object_type = 'T'

insert into #lockcfg
select 2,
'Database',
db_name (),
'page',
substring (char_value, 1, 33)
from master.dbo.sysattributes
where class = 5
and attribute = 0
and object_type = 'D'
and object = db_id ()

insert into #lockcfg
select 2,
'Database',
db_name (),
'row',
substring (char_value, 1, 33)
from master.dbo.sysattributes
where class = 5
and attribute = 1
and object_type = 'D'
and object = db_id ()

insert into #lockcfg
select 3,
'Server',
'default lock scheme',
'-',
substring (c.value2, 1, 10)
from master.dbo.sysconfigures f,
master.dbo.syscurconfigs c
where f.name = 'lock scheme'
and f.parent <> 19
and f.config <> 19
and c.config = f.config

insert into #lockcfg
select 3,
'Server',
'-',
'page',
'PCT = '
+ convert (varchar (11), pc.value)
+ ', LWM = '
+ convert (varchar (11), lc.value)
+ ', HWM = '
+ convert (varchar (11), hc.value)
from master.dbo.sysconfigures pf,
master.dbo.sysconfigures lf,
master.dbo.sysconfigures hf,
master.dbo.syscurconfigs pc,
master.dbo.syscurconfigs lc,
master.dbo.syscurconfigs hc
where pf.config = pc.config
and pf.name = 'page lock promotion PCT'
and pf.parent <> 19
and pf.config <> 19
and lf.config = lc.config
and lf.name = 'page lock promotion LWM'
and lf.parent <> 19
and lf.config <> 19
and hf.config = hc.config
and hf.name = 'page lock promotion HWM'
and hf.parent <> 19
and hf.config <> 19

insert into #lockcfg
select 3,
'Server',
'-',
'row',
'PCT = '
+ convert (varchar (11), pc.value)
+ ', LWM = '
+ convert (varchar (11), lc.value)
+ ', HWM = '
+ convert (varchar (11), hc.value)
from master.dbo.sysconfigures pf,
master.dbo.sysconfigures lf,
master.dbo.sysconfigures hf,
master.dbo.syscurconfigs pc,
master.dbo.syscurconfigs lc,
master.dbo.syscurconfigs hc
where pf.config = pc.config
and pf.name = 'row lock promotion PCT'
and pf.parent <> 19
and pf.config <> 19
and lf.config = lc.config
and lf.name = 'row lock promotion LWM'
and lf.parent <> 19
and lf.config <> 19
and hf.config = hc.config
and hf.name = 'row lock promotion HWM'
and hf.parent <> 19
and hf.config <> 19

select TYPE = type,
OBJECT = substring (name, 1, 28),
'LEVEL' = levelx,
'LOCK DATA' = txt
from #lockcfg
order by sort, name, levelx

print ' '

if (@SYS_FLAG IS NULL)
begin
select @sysval = 3,
@tabtext = 'USER'
end
else
begin
select @sysval = 1,
@tabtext = 'USER/SYSTEM'
end

select @allcount = ltrim (substring (convert (char (10),
convert (money,
count (*)),
1),
1,
7))
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 8192) = 8192

select @dpcount = ltrim (substring (convert (char (10),
convert (money,
count (*)),
1),
1,
7))
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 16384) = 16384

select @drcount = ltrim (substring (convert (char (10),
convert (money,
count (*)),
1),
1,
7))
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 32768) = 32768

if ((@allcount <> '0') and (@dpcount = '0') and (@drcount = '0'))
begin
print ' ALL %1! TABLES USE ALLPAGES LOCKING.', @tabtext
end
else if ((@allcount = '0') and (@dpcount <> '0') and (@drcount = '0'))
begin
print ' ALL %1! TABLES USE DATAPAGES LOCKING.', @tabtext
end
else if ((@allcount = '0') and (@dpcount = '0') and (@drcount <> '0'))
begin
print ' ALL %1! TABLES USE DATAROWS LOCKING.', @tabtext
end
else
begin
if (@allcount = '0')
begin
print ' THERE ARE NO %1! TABLES WITH ALLPAGES LOCKING.', @tabtext
end
else
begin
print ' THERE ARE %1! %2! TABLES WITH ALLPAGES LOCKING.',
@allcount, @tabtext

print ' '

select 'TABLE' = name,
OWNER = user_name (uid)
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 8192) = 8192
order by 'TABLE', OWNER
end

print ' '

if (@dpcount = '0')
begin
print ' THERE ARE NO %1! TABLES WITH DATAPAGES LOCKING.',
@tabtext
end
else
begin
print ' THERE ARE %1! %2! TABLES WITH DATAPAGES LOCKING.',
@dpcount, @tabtext

print ' '

select 'TABLE' = space (30),
OWNER = space (30)
where 1 = 2
union
select substring (name + ' *',
1,
30),
user_name (uid)
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 16384) = 16384
and (sysstat2 & 131072) = 131072
union
select name,
user_name (uid)
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 16384) = 16384
and (sysstat2 & 131072) <> 131072
order by 'TABLE', OWNER
end

print ' '

if (@drcount = '0')
begin
print ' THERE ARE NO %1! TABLES WITH DATAROWS LOCKING.',
@tabtext
end
else
begin
print ' THERE ARE %1! %2! TABLES WITH DATAROWS LOCKING.',
@drcount, @tabtext

print ' '

select 'TABLE' = space (30),
OWNER = space (30)
where 1 = 2
union
select substring (name + ' *',
1,
30),
user_name (uid)
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 32768) = 32768
and (sysstat2 & 131072) = 131072
union
select name,
user_name (uid)
from sysobjects
where (sysstat & 15) in (@sysval, 3)
and (sysstat2 & 32768) = 32768
and (sysstat2 & 131072) <> 131072
order by 'TABLE', OWNER
end
end

print ' '
go
sp_procxmode sp_lockconfig, anymode
go
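
Run it with no argument to report user tables only, or pass any non-null
character to include the system tables as well:

1> sp_lockconfig
2> go
1> sp_lockconfig 'Y'
2> go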
use sybsystemprocs
go
/*
* DROP PROC dbo.sp_servermap
*/
IF OBJECT_ID('dbo.sp_servermap') IS NOT NULL
BEGIN
DROP PROC dbo.sp_servermap
PRINT '<<< DROPPED PROC dbo.sp_servermap >>>'
END
go

create proc sp_servermap (@selection varchar(10) = "ABCDEF")
as

/* produces 6 "reports" against all possible data in
master..sysdatabases
master..sysdevices
master..sysusages

sp_servermap help
produces a list of the six reports.
A subset of the complete set of reports can be requested by passing
an argument that consists of a string containing the letters of the
desired report.

This procedure was developed on 4.9.1 server. It will run on 4.8
and 10.0 servers, but it has not been verified that the results
produced are correct.
*/

declare @atitle varchar(40),
@btitle varchar(40),
@ctitle varchar(40),
@dtitle varchar(40),
@etitle varchar(40),
@ftitle varchar(40),
@stars varchar(40),
@xstars varchar(40)

set nocount on

select @atitle = "A - DATABASE SEGMENT MAP",
@btitle = "B - DATABASE INFORMATION",
@ctitle = "C - DEVICE ALLOCATION MAP",
@dtitle = "D - DEVICE NUMBER, DEFAULT & SPACE USAGE",
@etitle = "E - DEVICE LOCATION",
@ftitle = "F - MIRRORED DEVICES",
@selection = upper(@selection),
@stars = replicate("*",40)

if @selection = "HELP" begin
print @atitle
print @btitle
print @ctitle
print @dtitle
print @etitle
print @ftitle
print ""
print "select any combination of reports by entering a string of"
print "report letters as the argument to sp_servermap:"
print " sp_servermap acd"
print "will select reports A,C and D."
print "calling sp_servermap with no argument will produce all reports"
return
end

select @@servername, "Current Date/Time" = getdate()
select "Version" = @@version

if charindex("A",@selection) > 0
begin
print ""
print @atitle
select @xstars = substring(@stars,1,datalength(@atitle))
print @xstars

select db=substring(db.name,1,15),db.dbid,
usg.segmap,
segs = substring(" U",sign(usg.segmap/8)+1,1) +
substring(" L",(usg.segmap & 4)/4+1,1) +
substring(" D",(usg.segmap & 2)/2+1,1) +
substring(" S",(usg.segmap & 1)+1,1),
"device fragment"=substring(dev.name,1,15),
"start (pg)" = usg.vstart,"size (MB)" = str(usg.size/512.,7,2)
from master.dbo.sysusages usg,
master.dbo.sysdevices dev,
master.dbo.sysdatabases db
where vstart between low and high
and cntrltype = 0
and db.dbid = usg.dbid
order by db.dbid, usg.lstart

print ""
print"Segment Codes:"
print "U=User-defined segment on this device fragment"
print "L=Database Log may be placed on this device fragment"
print "D=Database objects may be placed on this device fragment by DEFAULT"
print "S=SYSTEM objects may be placed on this device fragment"
print ""
end

if charindex("B",@selection) > 0
begin
print ""
print @btitle
select @xstars = substring(@stars,1,datalength(@btitle))
print @xstars

select db=substring(db.name,1,15),
db.dbid,
"size (MB)" = str(sum(usg.size)/512.,7,2),
"db status codes " = substring(" A",(status & 4)/4+1,1) +
substring(" B",(status & 8)/8+1,1) +
substring(" C",(status & 16)/16+1,1) +
substring(" D",(status & 32)/32+1,1) +
substring(" E",(status & 256)/256+1,1) +
substring(" F",(status & 512)/512+1,1) +
substring(" G",(status & 1024)/1024+1,1) +
substring(" H",(status & 2048)/2048+1,1) +
substring(" I",(status & 4096)/4096+1,1) +
substring(" J",(status & 16384)/16384+1,1) +
substring(" K",(status & 64)/64+1,1) +
substring(" L",(status & 128)/128+1,1) +
substring(" M",(status2 & 1)/1+1,1) +
substring(" N",(status2 & 2)/2+1,1) +
substring(" O",(status2 & 4)/4+1,1) +
substring(" P",(status2 & 8)/8+1,1) +
substring(" Q",(status2 & 16)/16+1,1) +
substring(" R",(status2 & 32)/32+1,1),
"created" = convert(char(9),crdate,6) + " " +
convert(char(5),crdate,8),
"dump tran" = convert(char(9),dumptrdate,6) + " " +
convert(char(5),dumptrdate,8)
from master.dbo.sysdatabases db,
master.dbo.sysusages usg
where db.dbid =usg.dbid
group by db.dbid
order by db.dbid

print ""
print "Status Code Key"
print ""
print "Code Status"
print "---- ----------------------------------"
print " A select into/bulk copy allowed"
print " B truncate log on checkpoint"
print " C no checkpoint on recovery"
print " D db in load-from-dump mode"
print " E db is suspect"
print " F ddl in tran"
print " G db is read-only"
print " H db is for dbo use only"
print " I db in single-user mode"
print " J db name has been changed"
print " K db is in recovery"
print " L db has bypass recovery set"
print " M abort tran on log full"
print " N no free space accounting"
print " O auto identity"
print " P identity in nonunique index"
print " Q db is offline"
print " R db is offline until recovery completes"
print ""
end

if charindex("C",@selection) > 0
begin
print ""
print @ctitle
select @xstars = substring(@stars,1,datalength(@ctitle))
print @xstars

select "device fragment"=substring(dev.name,1,15),
"start (pg)" = usg.vstart,"size (MB)" = str(usg.size/512.,7,2),
db=substring(db.name,1,15),
lstart,
segs = substring(" U",sign(usg.segmap/8)+1,1) +
substring(" L",(usg.segmap & 4)/4+1,1) +
substring(" D",(usg.segmap & 2)/2+1,1) +
substring(" S",(usg.segmap & 1)+1,1)
from master.dbo.sysusages usg,
master.dbo.sysdevices dev,
master.dbo.sysdatabases db
where usg.vstart between dev.low and dev.high
and dev.cntrltype = 0
and db.dbid = usg.dbid
group by dev.name, usg.vstart, db.name
having db.dbid = usg.dbid
order by dev.name, usg.vstart


print ""
print "Segment Codes:"
print "U=USER-definedsegment on this device fragment"
print "L=Database LOG may be placed on this device fragment"
print "D=Database objects may be placed on this device fragment by DEFAULT"
print "S=SYSTEM objects may be placed on this device fragment"
print ""
end

if charindex("D",@selection) > 0
begin
print ""
print @dtitle
select @xstars = substring(@stars,1,datalength(@dtitle))
print @xstars

declare @vsize int
select @vsize = low
from master.dbo.spt_values
where type="E"
and number = 3

select device = substring(name,1,15),
vdevno = convert(tinyint,substring(convert(binary(4),low),@vsize,1)),
"default disk?" = " " + substring("NY",(status & 1)+1,1),
"total (MB)" = str(round((high-low)/512.,2),7,2),
used = str(round(isnull(sum(size),0)/512.,2),7,2),
free = str(round(abs((high-low-isnull(sum(size),0))/512.),2),7,2)
from master.dbo.sysusages,
master.dbo.sysdevices
where vstart between low and high
and cntrltype=0
group by all name
having cntrltype=0
order by vdevno
end

if charindex("E",@selection) > 0
begin
print ""
print @etitle
select @xstars = substring(@stars,1,datalength(@etitle))
print @xstars

select device = substring(name,1,15),
location = substring(phyname,1,60)
from master.dbo.sysdevices
where cntrltype=0
end

if charindex("F",@selection) > 0
begin
if exists (select 1
from master.dbo.sysdevices
where status & 64 = 64)
begin

print ""
print @ftitle
select @xstars = substring(@stars,1,datalength(@ftitle))
print @xstars

select device = substring(name,1,15),
pri =" " + substring("* **",(status/256)+1,1),
sec = " " + substring(" ***",(status/256)+1,1),
serial = " " + substring(" *",(status & 32)/32+1,1),
"mirror" = substring(mirrorname,1,35),
reads = " " + substring(" *",(status & 128)/128+1,1)
from master.dbo.sysdevices
where cntrltype=0
and status & 64 = 64
end
else
begin
print ""
print "NO DEVICES ARE MIRRORED"
end
end

set nocount off


go
IF OBJECT_ID('dbo.sp_servermap') IS NOT NULL
BEGIN
PRINT '<<< CREATED PROC dbo.sp_servermap >>>'
grant execute on dbo.sp_servermap to sa_role
END
ELSE
PRINT '<<< FAILED CREATING PROC dbo.sp_servermap >>>'
go
use sybsystemprocs
go

IF OBJECT_ID('dbo.sp_spaceused_table') IS NOT NULL
BEGIN
DROP PROCEDURE dbo.sp_spaceused_table
IF OBJECT_ID('dbo.sp_spaceused_table') IS NOT NULL
PRINT '<<< FAILED TO DROP dbo.sp_spaceused_table >>>'
ELSE
PRINT '<<< DROPPED PROC dbo.sp_spaceused_table >>>'
END
go

create procedure sp_spaceused_table
@list_indices int = 0
as
declare @type smallint, -- the object type
@msg varchar(250), -- message output
@dbname varchar(30), -- database name
@tabname varchar(30), -- table name
@length int,
@object_id int

set nocount on

if @@trancount = 0
begin
set chained off
end

set transaction isolation level 1

create table #pagecounts
(
name varchar(45) null,
iname varchar(45) null,
low int null,
rowtotal int null,
reserved numeric(20,9) null,
data numeric(20,9) null,
index_size numeric(20,9) null,
unused numeric(20,9) null
)

select @object_id = min(id)
from sysobjects
where type = 'U'
and name not like "%pagecount%"

while (@object_id is not null)
begin
/*
** We want a particular object.
*/
insert #pagecounts
select name = o.name,
iname = i.name,
low = d.low,
rowtotal = rowcnt(i.doampg),
reserved = convert(numeric(20,9),
(reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg))),
data = convert(numeric(20,9), data_pgs(i.id, i.doampg)),
index_size = convert(numeric(20,9), data_pgs(i.id, i.ioampg)),
unused = convert(numeric(20,9),
((reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg)) -
(data_pgs(i.id, i.doampg) +
data_pgs(i.id, i.ioampg))))
from sysobjects o
,sysindexes i
,master.dbo.spt_values d
where i.id = @object_id
and o.id = @object_id
and i.id = o.id
and d.number = 1
and d.type = 'E'

select @object_id = min(id)
from sysobjects
where type = 'U'
and id > @object_id
and name not like "%pagecount%"

end

select @length = max(datalength(iname))
from #pagecounts

if (@list_indices = 1)
begin

if (@length > 20)
begin
select index_name = iname,
size = convert(char(10), convert(varchar(11),
convert(numeric(11,0),
index_size / 1024 *
low)) + ' KB'),
reserved = convert(char(10),
convert(varchar(11),
convert(numeric(11,0),
reserved / 1024 *
low)) + ' KB'),
unused = convert(char(10), convert(varchar(11),
convert(numeric(11,0), unused / 1024 *
low)) + ' KB')
from #pagecounts

end
else
begin
select index_name = convert(char(20), iname),
size = convert(char(10), convert(varchar(11),
convert(numeric(11,0),
index_size / 1024 *
low)) + ' KB'),
reserved = convert(char(10),
convert(varchar(11),
convert(numeric(11,0),
reserved / 1024 *
low)) + ' KB'),
unused = convert(char(10), convert(varchar(11),
convert(numeric(11,0), unused / 1024 *
low)) + ' KB')
from #pagecounts
end
end

if (@length > 20)
begin
select distinct name,
rowtotal = convert(char(11), sum(rowtotal)),
reserved = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(reserved) *
(low / 1024))) + ' KB'),
data = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(data) * (low / 1024)))
+ ' KB'),
index_size = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(index_size) *
(low / 1024))) + ' KB'),
unused = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(unused) *
(low / 1024))) + ' KB')
from #pagecounts
group by name
end
else
begin
select distinct name = convert(char(20), name),
rowtotal = convert(char(11), sum(rowtotal)),
reserved = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(reserved) *
(low / 1024))) + ' KB'),
data = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(data) * (low / 1024)))
+ ' KB'),
index_size = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(index_size) *
(low / 1024))) + ' KB'),
unused = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(unused) *
(low / 1024))) + ' KB')
from #pagecounts
group by name
end

return (0)
go

IF OBJECT_ID('dbo.sp_spaceused_table') IS NOT NULL
PRINT '<<< CREATED PROC dbo.sp_spaceused_table >>>'
ELSE
PRINT '<<< FAILED TO CREATE PROC dbo.sp_spaceused_table >>>'
go
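
By default only the per-table summary is printed; pass 1 to also list
every index with its size, reserved and unused space:

1> sp_spaceused_table
2> go
1> sp_spaceused_table 1
2> go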
use sybsystemprocs
go

if object_id('sp_whodo') is not null
begin
drop procedure sp_whodo
if object_id('sp_whodo') is not null
print '<<< Failed to drop procedure sp_whodo >>>'
else
print '<<< Dropped procedure sp_whodo >>>'
end
go

create procedure sp_whodo @loginame varchar(30) = NULL
as

declare @low int
,@high int
,@spidlow int
,@spidhigh int

select @low = 0
,@high = 32767
,@spidlow = 0
,@spidhigh = 32767

if @loginame is not NULL
begin
select @low = suser_id(@loginame)
,@high = suser_id(@loginame)

if @low is NULL
begin
if @loginame like "[0-9]%"
begin
select @spidlow = convert(int, @loginame)
,@spidhigh = convert(int, @loginame)
,@low = 0
,@high = 32767
end
else
begin
print "Login %1! does not exist.", @loginame
return (1)
end
end
end

select spid
,status
,substring(suser_name(suid),1,12) loginame
,hostname
,convert(char(3), blocked) blk
,convert(char(7), isnull(time_blocked, 0)) blk_sec
,convert(char(16), program_name) program
,convert(char(7), db_name(dbid)) dbname
,convert(char(16), cmd) cmd
,convert(char(6), cpu) cpu
,convert(char(7), physical_io) io
,convert(char(16), isnull(tran_name, "")) tran_name
from master..sysprocesses
where suid >= @low
and suid <= @high
and spid>= @spidlow
and spid <= @spidhigh

return (0)

go

if object_id('sp_whodo') is not null
begin
print '<<< Created procedure sp_whodo >>>'
grant execute on sp_whodo to public
end
else
print '<<< Failed to create procedure sp_whodo >>>'
go
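
Like sp_who, sp_whodo takes either a login name or a spid (passed as a
string); with no argument it lists every process. (A near-identical copy
is installed in master below.)

1> sp_whodo
2> go
1> sp_whodo 'sa'
2> go
1> sp_whodo '14'
2> go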

use master
go

if object_id('sp_whodo') is not null
begin
drop procedure sp_whodo
if object_id('sp_whodo') is not null
print '<<< Failed to drop procedure sp_whodo >>>'
else
print '<<< Dropped procedure sp_whodo >>>'
end
go

create procedure sp_whodo @loginame varchar(30) = NULL
as

declare @low int
,@high int
,@spidlow int
,@spidhigh int

select @low = 0
,@high = 32767
,@spidlow = 0
,@spidhigh = 32767

if @loginame is not NULL
begin

select @low = suser_id(@loginame)
,@high = suser_id(@loginame)

if @low is NULL
begin
if @loginame like "[0-9]%"
begin
select @spidlow = convert(int, @loginame)
,@spidhigh = convert(int, @loginame)
,@low = 0
,@high = 32767
end
else
begin
print "No login exists with the supplied name."
return (1)
end
end
end

select
spid
,status
,substring(suser_name(suid),1,12) loginame
,hostname
,convert(char(3), blocked) blk
,convert(char(16), program_name) program
,convert(char(7), db_name(dbid)) dbname
,convert(char(16), cmd) cmd
,convert(char(6), cpu) cpu
,convert(char(7), physical_io) io
from master..sysprocesses
where suid >= @low
and suid <= @high
and spid >= @spidlow
and spid <= @spidhigh

return (0)
go

if object_id('sp_whodo') is not null
begin
print '<<< Created procedure sp_whodo >>>'
grant execute on sp_whodo to public
end
else
print '<<< Failed to create procedure sp_whodo >>>'
go

Create procedure sp_whodoneit
as
Create table #usr_locks(
spid int, dbid smallint, id int)
Insert Into #usr_locks(spid,dbid,id)
Select distinct spid,dbid,id
From master..syslocks
Select
str(procs.spid,4) as "Spid",
substring(isnull(suser_name(procs.suid),"Sybase"),1,12) as "User",
hostname as "Host",
substring(cmd,1,6) as "Cmd",
convert(varchar(5),procs.cpu) as "Cpu",
convert(varchar(7),physical_io) as "I/O",
convert(varchar(3),blocked) as "Blk",
convert(varchar(10),db_name(ul.dbid)) as "DB Name",
ul.id as "Object Id",
getdate() as "Date"
From master..sysprocesses procs, #usr_locks ul
Where procs.spid *= ul.spid
go
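
sp_whodoneit takes no arguments; it reports one row per process currently
holding locks, along with the id of the object locked:

1> sp_whodoneit
2> go

The two-line csh script that follows is the sqlsa wrapper that
update_stats.csh (further below) expects to find in the PATH.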
#!/bin/csh -f

isql -U<dbusr> -P<dbpw> -S<dbsvr> -w265 $*
exit($status)
#!/bin/csh
# ########################################################################
# #
# # SCCS Keyword Header
# # -------------------
# #
# # Module Name : update_stats.csh
# # Version : 1.8
# # Last Modified: 2/16/98 at 17:19:38
# # Extracted : 2/16/98 at 17:19:39
# # Archived as : <host>:/u/sybase/SCCS/s.update_stats.csh
# #
# ########################################################################

# upd_stats.csh
# ------------------
#
# Shell to update the distribution pages for each table in a database.
#
# Requires sqlsa (script w/ the proper isql login for dbo of a database)
# ex:
# #!/bin/csh -f
# isql -U<dbusr> -P<dbpw> -S<dbsvr> -w265 $*
# exit($status)
#
# Author: FJ Lundy, 2/96

ARGS:
set progname = `basename $0`
if ($#argv != 2) then
goto USAGE
endif
set dbdb = $1
set parallel_jobs = $2

INIT:
# Declare intermediate files
set filebase = /tmp/$progname:r.-D$dbdb
set cmdfile = $filebase.sql
set awkfile = $filebase.awk
set tblfile = $filebase.tbl
set workflag = $filebase.working
set logfile = $filebase.log
set runningflag = $filebase.running

# Check for another running copy of this process
if ( -f $runningflag ) goto ERROR

# Trap interrupts so that the clean-up at the DONE label always runs
onintr DONE

# Clean up from previous runs
rm -f $filebase.* >& /dev/null

# Set the 'running flag' (this step must FOLLOW the 'clean-up from previous
# runs' step!)
touch $runningflag

# Which OS are we running on?
set os = `uname`
switch ($os)
case 'IRIX':
case 'IRIX64':
case 'HP-UX':
set splitFlag = '-l'
breaksw
case 'Linux':
case 'SunOS':
set splitFlag = '-'
breaksw
default:
echo "ERROR: $progname- Unsupported OS ($os). Aborting"
exit(-1)
endsw

MAIN:
# Start the Log
rm -f $logfile
echo "$0 $*" > $logfile
echo "NOTE: $progname- (`date`) BEGIN $progname" >> $logfile


# Create the awk command file.
cat << EOJ > $awkfile
\$0 !~ /^\$/ {
tblname = \$1
printf("declare @msg varchar(255), @dt_start datetime, @dt_end datetime\n")
printf("select @msg = 'Updating Statistics for: Db(%s)'\n", "$dbdb")
printf("print @msg\n")
printf("select @dt_start = getdate()\n")
printf("update statistics %s\n", tblname)
printf("exec sp_recompile '%s'\n", tblname)
printf("select @dt_end = getdate()\n")
printf("select @msg = 'Table(%s)'\n", tblname)
printf("print @msg\n")
printf("select @msg = '\tstart(' + convert(varchar, @dt_start) + ')'\n")
printf("print @msg\n")
printf("select @msg = '\t end(' + convert(varchar, @dt_end) + ')'\n")
printf("print @msg\n")
printf("print ''\n")
printf("go\n\n")
}
EOJ
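
For each table name it is fed, the awk script above emits a T-SQL batch along
the following lines (the database pubs2 and table dbo.authors are
illustrative):

    declare @msg varchar(255), @dt_start datetime, @dt_end datetime
    select @msg = 'Updating Statistics for: Db(pubs2)'
    print @msg
    select @dt_start = getdate()
    update statistics dbo.authors
    exec sp_recompile 'dbo.authors'
    select @dt_end = getdate()
    select @msg = 'Table(dbo.authors)'
    print @msg
    go

(the remaining print statements, elided here, report the start and end
timestamps). Running sp_recompile immediately after update statistics ensures
stored procedures referencing the table pick up the new distribution
statistics the next time they execute.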


# Create a list of tables to update the stats for
sqlsa << EOJ | tail +3 | sed 's/^[ ]*//g' | cut -f1 -d\ > $tblfile
set nocount on
use $dbdb
go
select u.name + '.' + o.name 'Table',
sum((reserved_pgs(i.id, i.doampg) + reserved_pgs(i.id, i.ioampg)) * 2) 'Kb'
from sysindexes i, sysobjects o, sysusers u
where (o.id = i.id) and (o.uid = u.uid) and (o.type = 'U' or o.type = 'S')
group by u.name, o.name
order by Kb desc
go
EOJ

# Split the files into equal-sized chunks based on the passed
# parameter for the number of parallelized jobs
@ ct = 0
foreach tbl (`cat $tblfile`)
@ i = $ct % $parallel_jobs
echo "$tbl" >> $tblfile.$i
@ ct = $ct + 1
end
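
Because the table list arrives sorted largest first and the tables are dealt
out round-robin (modulo the job count), each job file gets roughly the same
share of the big tables, which keeps the parallel streams reasonably
balanced.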


# For each of the created table lists:
# 1) create the T-SQL, 2) set a work flag, 3) background the job
@ i = 0
set all_work_flags = ''
foreach file ( $tblfile.* )
# Create the T-SQL command file
@ i = $i + 1
echo "set nocount on" > $cmdfile.$i
echo "use $dbdb" >> $cmdfile.$i
echo "go" >> $cmdfile.$i
awk -f $awkfile $file >> $cmdfile.$i

# Spawn a subshell and remove the working flag when done
# Log output to a log file common to all threads. This can possibly cause
# lost information in the log file if all the threads come crashing in
# at once. Oh well...
set all_work_flags = ( $all_work_flags $workflag.$i )
touch $workflag.$i
(sqlsa < $cmdfile.$i >>& $logfile ; rm -f $workflag.$i) &
end


# Loop until all of the spawned processes are finished (as indicated by the
# absence of working flags)
while ( 1 )
set num_working = `ls $workflag.* | wc -l`
if ( $num_working == 0 ) break
sleep 10
end # end-while: wait for work to finish

DONE:
rm $awkfile $cmdfile.* $tblfile $tblfile.*
rm $runningflag
echo "NOTE: $progname- (`date`) END $progname" >> $logfile
cat $logfile
exit(0)

USAGE:
echo ''
echo "USAGE : $progname <db> <# of parallel jobs>"
echo ' Updates the distribution pages for each user and system table in'
echo ' the specified database.'
echo 'REQUIRES: sqlsa'
echo ''
exit(-1)

ERROR:
echo ''
echo "ERROR: $progname- This process is already running for $dbdb. Aborting"
echo ''
exit(-2)

# EOJ
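
A typical invocation, updating statistics in pubs2 across four parallel isql
streams (both arguments are illustrative):

    update_stats.csh pubs2 4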
