
Sybase FAQ: 1/19 - index


David Owen

Jan 24, 2001, 5:14:13 AM
Posted-By: auto-faq 3.3.1 beta (Perl 5.005)
Archive-name: databases/sybase-faq/part1
URL: http://www.isug.com/Sybase_FAQ
Version: 1.2
Maintainer: David Owen
Last-modified: 2000/06/07
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Frequently Asked Questions

Sybase FAQ Home Page Adaptive Server Enterprise FAQ Adaptive Server Anywhere
FAQ Replication Server FAQ Search the FAQ
Sybase FAQ

Main Page


*Where can I get the latest release of this FAQ?
*What's new in this release?
*How can I help with the FAQ?
*Who do I tell about problems in the FAQ?
*Acknowledgements and Thanks
*Hall of Fame
*Disclaimer
*General Index

Main | ASE | ASA | REP | Search


-------------------------------------------------------------------------------

Where can I get the latest release of this FAQ?

International Sybase User Group

The main page for this site is http://www.isug.com/Sybase_FAQ. It is hosted
there by kind permission of the International Sybase User Group (http://
www.isug.com) as a service to the Sybase community.

To get a text version of this FAQ:


ftp://ftp.midsomer.org/pub/FAQ_txt_tar.Z

or

ftp://ftp.midsomer.org/pub/FAQ_txt.zip

If you want uncompressed versions of the various sections, they can be
obtained from ASE, ASA & REP.

To get the HTML for this FAQ:


ftp://ftp.midsomer.org/pub/FAQ_html_tar.Z

or

ftp://ftp.midsomer.org/pub/FAQ_html.zip

Sybase FAQ version 1.2, released 2000/06/07.

Back to Top
-------------------------------------------------------------------------------

What's new in this release?

Release 1.2

New FAQ items:

*A general index for all sections.
*Added a link to Thierry Antinolfi's DBA Devil site.
*Added a link to WinSQL site.
*Added a link to Imran Hussain's site.
*After a little provoking Jim Tench (jim....@btinternet.com) wrote two
very good articles on connecting to Sybase using Open Client and ODBC.
*The start of a complete new replication section. Even at this early
stage it shows potential, even though I say so myself. Thanks to the
following for contributions and postings to sybase.public.rep-server:
+Manish I Shah
+Jason Webster
+Robert Waywell
+A special thanks also goes to Bill Bell of Sybase and to the DBAs at
Trimac of Calgary. Without their willingness to trust me, none of it
would have gotten written since I would know absolutely nothing about
replication, rather than the very little I know now!

Updated FAQ items:

*Added links to Eric Miner's (eric....@sybase.com) Techwave 1999
presentations to ASE Q1.5.1.
*Added a link to Jeffrey Garbus's (je...@soaringeagleltd.com) Techwave
1999 presentation on DBA tasks to ASE Q1.2.10.

Deleted FAQ items:

*None.

Release 1.1

The biggest addition for this release is the start of the FAQ for ASA. It's not
much, but definitely a start. Thanks to Leo Tohill for providing some good
ideas and reviewing the stuff I put together. Maybe some more people can be
persuaded to help.

I have also done a little renumbering again. Sorry (especially to Todd
Boss) if this causes any hassle. I am trying to make it easier for people to
find stuff, please bear with me. I have made sure that all of the questions
have a unique number, so it is possible to refer to Qx.y.z of the ASE FAQ for
instance. I am working on building an automatic index and cross reference,
that will be part of the search stuff.

I spent quite a bit of time trying to correct problems with the search scripts
so that links and icons worked after a search. I know that I need to modify
and improve the search engine, which I will work on over the next couple of
months.

New FAQ items:

*Started a new section for platform specific ASE issues.
*Started a new section for issues to do with Open Client.
*Improved the ASA section. This now needs some really good stuff on
performance and tuning, troubleshooting, admin, etc.
*Links to De Clarke's Sybtcl tools.
*Links to John Knox's website.
*Added text only version of site.

Updated FAQ items:

*Updated the links to Ed Barlow's site.
*Re-added numbers for FAQs throughout.
*Added a new method for the generation of sequence numbers thanks to John
Drevicky.
*Changes to the Hall of Fame!

Deleted FAQ items:

*Dead links to a number of User Groups.

Back to Top

Release 1.0.1

Section 9 was a bit of a mess, all of the hyperlinks had gone south. Fixed one
or two other links that I found were bad.

Back to Top

Release 1.0

I have updated quite a number of questions, correcting the odd typo and making
changes relevant to System 11.x throughout. This last process is continuing,
so do not get upset that your favourite 11.9 trick is not mentioned; get even:
tell me about it!

This is my first release as maintainer, and most of the work this time around
has been a restructure that will allow for the incorporation of FAQs for
Replication, Adaptive Server and the other Sybase products. (Please see "How
Can I Help").

Another big change is the location. When I took over maintaining the FAQ it
occurred to me that we were going to have yet another web address for it.
Since Pablo has left SGI, the original link pointing to Tom O'Connell's site
disappeared. Tom will add a link pointing to the new site, but what happens
when I hand over the reins (after winning the lottery of course :-)? Michael
Peppler suggested the ISUG site as a permanent home, and this fitted in
perfectly with my own ideas. I must emphasize that this FAQ is not part of the
ISUG or Sybase. It is independent of both those bodies and is not officially
endorsed by either. Please see the Disclaimer.

New FAQ items:

*Row Level Locking
*CIS
*Thanks to Rob Verschoor for links to his site (Certification, Quick Ref
Guide and Dynamic SQL).
*Eric McGrane provided a useful means of extending a master database on a
full master device.
*Sean Kiely provided a good solution for recovering databases if the
transaction log is full and the system will not recover.
*Linked to Anthony Mandic's Sybase on Solaris paper.

Updated FAQ items:

*A million minor changes. No major rewrites.

Deleted FAQ items:

*Dead links to a number of User Groups.

I am sorry if you have links pointing to particular sections, but I felt that a
complete reorganisation was necessary in order to support the incorporation of
new FAQs for the other products. Feedback is always welcome of course!

Back to Top
-------------------------------------------------------------------------------

How can I help with the FAQ?

I have had offers from a couple of people to write sections (I will be in touch
to get your sections added for the next release, thanks), but if you feel that
you are in a position to add support for a section, or if you have some FAQs to
add, please let me know. This is a resource that we should all support, so
send me the stuff and I will include it.

Currently I am looking for maintainers of a section for Replication, Adaptive
Server Anywhere, IQ server, MPP Server and Open Server. I am not sure whether
to add a section for Omni Server. I sort of feel that since Omni has been
subsumed into ASE as CIS that any FAQs should really be incorporated
there. However, if you know of some good Omni gotchas or tips, whether they
are still there in CIS or not, please send them in. I certainly plan to have a
subsection of ASE dealing with CIS even if Omni does not get its own major
section. I also think that we need sections on some of the really new stuff.
Jaguar and the new engines also deserve a spot.

Another very useful way that you can help is in getting people to update their
links. I have seen lots of links recently, some still pointing to Pablo's
original, some pointing to Tom's site but referring to it as coming from the
SGI site.

Back to Top
-------------------------------------------------------------------------------

Who do I tell about problems in the FAQ?

The current maintainer is David Owen (do...@midsomer.org) and you can send
errors in the FAQ directly to me. If you have an FAQ item (both the question
and the answer) send it to syb...@midsomer.org and I will include it.

Do not send email to any of the officials at the ISUG; they are simply hosting
the FAQ and are not responsible for its contents.

Also, do not send email to Sybase; they are not responsible for the contents
either. See the Disclaimer.

Back to Top
-------------------------------------------------------------------------------

Acknowledgements and Thanks

Special thanks must go to the following people for their help in getting this
FAQ to where it is today.

*Pablo Sanchez for getting the FAQ off the ground in the first place and
for many years of dedicated work in maintaining it.

*Anthony Mandic (a...@peppler.org) for a million things. Patiently
answering questions in all of the Sybase news groups, without which most
beginners would be lost. For supporting and encouraging me in getting this
FAQ together and for providing some pretty neat graphics.

*The ISUG, especially Luc Van der Veurst (lu...@az.vub.ac.be) and Michael
Peppler (mpep...@peppler.org), for hosting this FAQ and providing support
in setting up the website.

*The members of the various news groups and mailing lists who, like
Anthony, provide unstinting support. The list is fairly long, but I think
that Bret Halford (br...@sybase.com) deserves a mention. If you go to Deja
News and do a search, he submits even more replies than Anthony.

Back to Top
-------------------------------------------------------------------------------

Hall of Fame

I am not sure how Pablo chose his select list, but there is certainly no question
as to their inclusion. I know that there are a couple of awards that the ISUG
give out each year for the people that the ISUG members believe have
contributed most to the Sybase community that year. I think that this section
should honour those people that deserve an award each and every year. If you
know of a candidate, let me know and I will consider his or her inclusion.
Self nominations are not acceptable :-)

The following people have made it to the Sybase FAQ Hall of Fame:

*Michael Peppler (mpep...@peppler.org) For Sybperl and all of the other
tools of which he is author or instigator plus the ceaseless support that
he provides through countless mailing lists, newsgroups and directly via
email.

*Scott Gray (gr...@voicenet.com) Father of sqsh, much more than simply a
replacement for isql. How anyone developing or administering Sybase can
survive without it, I will never know.

*Pablo Sanchez (pa...@divideview.com) Pablo got the first web based FAQ
off the ground, wrote all of the first edition and then maintained it for a
number of years. He did a fantastic job, building a resource that is
worth its weight in gold.

Back to Top
-------------------------------------------------------------------------------

Disclaimer

This article is provided as is without any express or implied warranties.
Whilst every endeavour has been taken to ensure the accuracy of the information
contained in this article, neither the author nor any of the contributors
assumes responsibility for errors or omissions, or for damages resulting from
the use of the information contained herein.

If you are not happy about performing any of the suggestions contained within
this FAQ, you are probably better off calling Sybase Technical Support.

Back to Top
-------------------------------------------------------------------------------

ASE

Basic ASE Admin


1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop SQL Server when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming a Certified Sybase
Professional DBA (CSPDBA)?
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How Can I Run the SQL Server Upgrade Manually?

User Database Administration


1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server

Advanced Administration


1.3.1 How do I clear a log suspend'd connection?
1.3.2 What's the best value for cschedspins?
1.3.3 What traceflags are available?
1.3.4 How do I use traceflags 5101 and 5102?
1.3.5 What is cmaxpktsz good for?
1.3.6 What do all the parameters of a buildmaster -d<device> -yall
mean?
1.3.7 What is CIS and how do I use it?
1.3.8 If the master device is full how do I make the master database
bigger?

General Troubleshooting

1.4.1 How do I turn off marked suspect on my database?
1.4.2 On startup, the transaction log of a database has filled and
recovery has suspended, what can I do?

Performance and Tuning


1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered
indexes?
1.5.4 Optimistic versus Pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE/SQL Server?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

Platform Specific Issues


2.1 How to Start ASE on Remote NT Servers

DBCCs


3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 The unauthorized DBCC list with doco - see Q11.4.1
3.4 Fixing a Munged Log
3.5 Another site with DBCC commands - see Q11.4.2

isql


4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?

bcp


5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

SQL Fundamentals


6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a
deferred update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 Does Sybase support Row Level Locking?
6.1.12 Why do my page locks not get escalated to a table lock after 200
locks?

SQL Advanced


6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE/SQL Server?

Open Client


7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to Sybase using ODBC

Freeware


9.1 sp_freedevice - lists device, size, used and free.
9.2 sp_whodo - augments sp_who by including additional columns: cpu,
I/O...
9.3 SQL and sh(1) to dynamically generate a dump/load database
command.
9.4 SybPerl - Perl interface to Sybase.
9.5 dbschema.pl - SybPerl script to take a logical snap of a
database.
9.6 Sybtcl - TCL interface to Sybase.
9.7 Augmented system stored procedures.
9.8 Examples of Open Client and Open Server programs -- see Q11.4.14.
9.9 SQL to determine the space used for an index.
9.10 xsybmon - an X interface to sp_monitor
9.11 sp_dos - This procedure graphically displays the scope of an
object
9.12 sqsh - a superset of dsql with local variables, redirection,
pipes and all sorts of goodies.
9.13 sp_getdays - returns days in current month.
9.14 ddl_insert.pl - creates insert DDL for a table.
9.15 sp_ddl_create_table - creates DDL for all user tables in the
current database
9.16 int.pl - converts interfaces file to tli
9.17 How to access a SQL Server using Linux see also Q11.4.6
9.18 sp__revroles - creates DDL to sp_role a mirror of your SQL Server
9.19 sp__rev_configure - creates DDL to sp_configure a mirror of your
SQL Server
9.20 sp_servermap - overview of your SQL Server
9.21 sp__create_crosstab - simplify crosstable queries
9.22 update statistics script
9.23 lightweight Sybase Access via Win95/NT
9.24 Sybase on Linux
9.25 How to configure shared-memory for Linux
9.26 sp_spaceused_table
9.27 sybdump - a Tcl script for dumping a database schema to disk

Miscellany


12.1 What can Sybase IQ do for me?
12.2 Net-review of Sybase books
12.3 email lists
12.4 Finding Information at Sybase

ASA


0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information

REP

Introduction to Replication Server


1.1 Introduction
1.2 Replication Server Components
1.3 What is the Difference Between SQL Remote and Replication Server?

Replication Server Introduction


2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1

Troubleshooting Replication Server


3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log on
OLTP is filling.

Additional Information/Links


4.1 Links
4.2 Newsgroups

David Owen

Jan 24, 2001, 5:14:14 AM
Posted-By: auto-faq 3.3.1 beta (Perl 5.005)
Archive-name: databases/sybase-faq/part3

URL: http://www.isug.com/Sybase_FAQ
Version: 1.2
Maintainer: David Owen
Last-modified: 2000/06/07
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Frequently Asked Questions

Sybase FAQ Home Page Adaptive Server Enterprise FAQ Adaptive Server Anywhere
FAQ Repserver FAQ Search the FAQ

Sybase Replication Server


1. Introduction to Replication Server
2. Replication Server Administration
3. Troubleshooting Replication Server
4. Additional Information/Links


Introduction to Replication Server


1.1 Introduction
1.2 Replication Server Components

1.3 What is the Difference Between SQL Remote and Replication Server?


Thanks go to Manish I Shah for major help with this introduction.

next prev ASE FAQ
-------------------------------------------------------------------------------

1.1 Introduction

-------------------------------------------------------------------------------

What is Replication Server

Replication Server moves transactions (insert, updates and deletes) at the
table level from a source dataserver to one or more destination dataservers.
The dataserver could be ASE or another major DBMS flavour (including DB2,
Informix, Oracle). The source and destinations need not be of the same type.

What can it do?

*Move data from one source to another.
*Move only a subset of data from source to destination. So, you can
'subscribe' to a subset of data, or a subset of the columns, in the source
table, e.g. select * from clients where state = 'NY' (see the sketch after
this list).
*Manipulation/transformation of data when moving from source to
destination. E.g. it can map data from a data-type in DB2 to an equivalent
in Sybase.*
*Provide a warm-standby system. Can be incorporated with Open Switch to
provide a fairly seamless fail-over environment.
*Merge data from several source databases into one destination database
(could be for a warehouse type environment for example).
*Move data through a complicated network down to branch offices, say, only
sending the relevant data to each branch.

(* This is one of Sybase replication's real strengths, the ability to define
function string classes which allow the conversion of statements from one SQL
dialect to match the dialect of the destination machine. Ed)

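As a rough illustration of the 'subscribe to a subset' item above, the
following is a minimal sketch of a Replication Server subscription with a
where clause; the replication definition, server and database names are
invented for the example:

create subscription clients_ny_sub
for clients_repdef
with replicate at BRANCH_DS.sales_db
where state = 'NY'
go
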
How soon does the data move

The data moves asynchronously. The time it takes to reach the destination
depends on the size of your transaction, level of activity in that particular
database (a database as in Sybase systems), the length of the chain (one or
more replication servers that the transaction has to pass through to reach the
destination), the thickness of pipe (network), how busy your replication server
is etc. Usually, on a LAN, for small transactions, this is about a second.

Back to top
-------------------------------------------------------------------------------

1.2 Replication Server Components

-------------------------------------------------------------------------------

Basic

Primary Dataserver

The source of data where client applications enter/delete and modify data. As
mentioned before, this need not be ASE, it can be Microsoft SQL Server, Oracle,
DB2, Informix. (I know that I should get a complete list.)

Replication Agent/Log Transfer Manager

Log Transfer Manager (LTM) is a separate program/process which reads the
transaction log from the source server and transfers the records to the
replication server for further processing. With ASE 11.5, this has become part
of ASE and is now called the Replication Agent. However, you still need to use
an LTM for non-ASE sources. I imagine there is a version of LTM for each kind
of source (DB2, Informix, Oracle etc). When replication is active, you see one
connection per replicated database in the source dataserver (sp_who).

Replication Server(s)

The replication server is an Open Server/Open Client application. The server
part receives transactions being sent by either the source ASE or the source
LTM. The client part sends these transactions to the target server which could
be another replication server or the final dataserver. As far as I know, the
server does not include the client component of any of the other DBMSes out of
the box.

Replicate (target) Dataserver

The server in which the final replication server (in the queue) will repeat the
transaction done on the primary. You will see a connection, one for each target
database, in the target dataserver when the replication server is actively
transferring data (when idle, the replication server disconnects or fades out
in replication terminology).

Back to top
-------------------------------------------------------------------------------

1.3 What is the Difference Between Replication Server and SQL Remote?

-------------------------------------------------------------------------------

Both SQL Remote and Replication Server perform replication. SQL Remote was
originally part of the Adaptive Server Anywhere tool kit and is intended for
intermittent replication. (The classic example is that of a salesman
connecting on a daily basis to upload sales and download new prices and
inventory.) Replication Server is intended for near real-time replication
scenarios.

Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

Replication Server Administration


2.1 How can I improve throughput?
2.2 Where should I install replication server?
2.3 Using large raw partitions with Replication Server on Unix.
2.4 How to replicate col = col + 1


next prev ASE FAQ
-------------------------------------------------------------------------------

2.1 How can I improve throughput?

-------------------------------------------------------------------------------

Check the Obvious

First, ensure that you are only replicating those parts of the system that need
to be replicated. Some of this is obvious. Don't replicate any table that
does not need to be replicated. Check that you are only replicating the
columns you need. Replication is very sophisticated and will allow you to
replicate both a subset of the columns as well as a subset of the rows.

Replicate Minimum Columns

Once the replication is set up and synchronised, it is only necessary to
replicate those parts of the primary system that actually change. You are only
replicating those rows and columns that need to be replicated, but you only
need to replicate the actual changes. Check that each replication definition
is defined using the clause:

create replication definition rep_def_name
with primary...
...
replicate minimal columns

Second Replication Server

This might be appropriate in a simple environment on systems with spare cycles
and limited space on the network. When Sybase replicates from a primary to a
replicate using only one replication server the data is transferred across the
network uncompressed. However, the communication between two replication
servers is compressed. By installing a second replication server it is
possible to dramatically reduce the bandwidth needed to replicate your data.

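If you do go down the second-server road, the two replication servers are
linked with a route. A minimal sketch of the command (the server and user
names are invented for the example) looks something like this:

create route to SITE2_RS
set username site2_rsi
set password site2_rsi_pwd
go
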
Dedicated Network Card

Obviously, if replication is sharing the same network resources that all of the
clients are using, there is the possibility for a bottleneck if the network
bandwidth is close to saturation. If a second replication server is not going
to cut it since you already have one or there are no spare cycles, then a
second network card may be the answer.

First, you will need to configure ASE to listen on two network connections.
This is relatively straightforward. There is no change to the client
configuration. They all continue to talk to Sybase using the same connection.
When defining the replication server, ensure that the interfaces/sql.ini entry
that it uses only has the second connection in it. This may involve some
jiggery pokery with environment variables, but should be possible, even on NT!
You need to be a little careful with network configuration. Sybase will
communicate with the two servers on the correct address, but if the underlying
operating system believes that both clients and repserver can be serviced by
the same card, then it will use the first card that it comes to. So, if you
had the situation that all of the clients, ASE and the replication server were
on 192.168.1.0, and the host running ASE had two cards onto this same segment,
then it would choose to route all packets through the first card. OK, so this
is a very simplistic error to correct, but similar things can happen with more
convoluted and, superficially, better thought out configurations.

+---------+                              +-----------+    +-----------+
|         |--> NE(1) --> All Clients...  |           |    |           |
| Primary |                              | repserver |    | replicate |
|         |--> NE(2) ------------------->|           |--->|           |
|         |                              |           |    |           |
+---------+                              +-----------+    +-----------+

So, configure NE(1) to be on 192.168.1.0, say, and NE(2) to be on 192.168.2.0
and all should be well. OK, so my character art is not perfect, but I think
that you get the gist!

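As a rough sketch of the interfaces arrangement described above (Sun-style
entries; the server name, host names and port are invented for the example,
and sql.ini on NT uses a different layout):

# interfaces entry read by ASE itself: listen on both cards,
# ordinary clients connect via the first address
PRIMARY_ASE
	master tcp ether ase_host 5000
	master tcp ether ase_host_nic2 5000
	query tcp ether ase_host 5000

# separate interfaces file given only to the replication server,
# containing just the second card's address
PRIMARY_ASE
	query tcp ether ase_host_nic2 5000
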
Back to top
-------------------------------------------------------------------------------

2.2 Where should I install replication server?

-------------------------------------------------------------------------------

A seemingly trivial question, but one that can cause novices a bit of worry.

There are three answers: on the primary machine, on the replicate machine or on
a completely separate machine. There is no right answer, and if you are doing
an initial install it probably pays to consider the future, consider the
proposed configuration and have a look at the load on the available machines.

It is probably fair to say that replication is not power hungry but neither is
it free. If the primary is only just about coping with its current load, then
it might be as well looking into hosting it on another machine. The argument
applies to the replicate. If you think that network bandwidth may be an issue,
and you may have to add a second replication server, you may be better off
starting with repserver running on the primary. It is marginally easier to add
a repserver to an existing configuration if the first repserver is on the
primary.

Remember that a production replication server on Unix will require raw devices
for the stable devices and that these can be no more than 2GB in size. If you
are restricted in the number of raw partitions you have available on a
particular machine, then this may have a bearing. See Q2.3.

Installing replication server on its own machine will, of course, introduce all
sorts of problems of its own, as well as answering some. The load on the
primary or the replicate is reduced considerably, but you are definitely going
to add some load to the network. Remember that ASE->Rep and Rep->ASE traffic is
uncompressed. It is only Rep->Rep that is compressed.

Back to top
-------------------------------------------------------------------------------

2.3 Using large raw partitions with Replication Server on Unix.

-------------------------------------------------------------------------------

It is a good practice with production installations of Replication Server on
Unix that you use raw partitions for the stable devices. This is for just the
same reason that production ASE's use raw partitions. Raw devices can be a
maximum of 2GB with replication server up to release 11.5. (I have not checked
12.)

In order to utilise a raw partition that is greater than 2GB in size you can do
the following (remember all of the cautionary warnings about trying this sort
of stuff out in development first!):
add partition firstpartition on '/dev/rdsk/c0t0d0s0' with size 2024
go
add partition secondpartition on '/dev/rdsk/c0t0d0s0' with size 2024
starting at 2048
go

Notice that the initial partition is sized at 2024MB and not 2048. I have not
found this in the documentation, but replication certainly seems to have a
problem allocating a full 2GB. Interestingly, doing the same operation through
Rep Server Manager and Sybase Central caused no problems at all.

Back to top
-------------------------------------------------------------------------------

2.4 How to replicate col = col + 1

-------------------------------------------------------------------------------

Firstly, while the rule that you never update a primary key may be a
philosophical choice in a non-replicated system, it is an architectural
requirement of a replicated system.

If you use simple data replication, and your primary table is:
id
---
1
2
3

and you issue a:
update table set id=id+1

Rep server will do this in the replicate:
begin tran
update table set id=2 where id=1
update table set id=3 where id=2
update table set id=4 where id=3
commit tran

Hands up all who can see a bit of a problem with this! Remember, repserver
doesn't replicate statements, it replicates the results of statements.

One way to perform this update is to build a stored procedure on both sides
that executes the necessary update and replicate the stored procedure call.

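A minimal sketch of that approach, with invented object names (a matching
function replication definition and subscription still have to be created in
the replication server):

-- create the procedure in both the primary and the replicate database
create procedure bump_id
as
    update demo_table set id = id + 1
go

-- on the primary database only: mark the procedure for replication
sp_setrepproc bump_id, 'function'
go
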
Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

Replication Server Troubleshooting


3.1 Why am I running out of locks on the replicate side?
3.2 Someone was playing with replication and now the transaction log on
OLTP is filling.


next prev ASE FAQ
-------------------------------------------------------------------------------

3.1 Why am I running out of locks on the replicate side?

-------------------------------------------------------------------------------

Sybase replication works by taking each transaction that occurs in the primary
dataserver and applying it to the replicate. Since replication works on the
transaction log, a single, atomic, update on the primary side that updates a
million rows will be translated into a million single row updates. This may
seem very strange but is a simple consequence of how it works. On the primary,
this million row update will attempt to escalate the locks that it has taken
out to an exclusive table lock. However, on the replicate side each row is
updated individually, much as if they were being updated within a cursor loop.
Now, Sybase only tries to escalate locks from a single atomic statement (see
ASE Qx.y), so it will never try to escalate the lock. However, since the
updates are taking place within a single transaction, Sybase will need to take
out enough page locks to lock the million rows.

So, how much should you increase the locks parameter on the replicate side? A
good rule of thumb might be double it or add 40,000, whichever is the larger.
This has certainly worked for us.

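For example, if the replicate ASE currently has 50,000 locks configured, a
first stab following the rule of thumb above might be (the value is just an
illustration; depending on the ASE release the change may need a restart to
take effect):

1> sp_configure "number of locks", 100000
2> go
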
Back to top
-------------------------------------------------------------------------------

3.2 Someone was playing with replication and now the transaction log on OLTP
is filling.

-------------------------------------------------------------------------------

Once replication has been configured, ASE adds another marker to the
transaction log. The first marker is the conventional one that marks which
transactions have had their data written to disk. The second is there to
ensure that the transactions have also been replicated. Clearly, if someone
installed replication and did not clean up properly after themselves, this
marker will still be there and consequently the transaction log will be filling
up. If you are certain that replication is not being used on your system, you
can disable the secondary truncation marker with the following commands:

1> sp_role "grant", sybase_ts_role, sa
2> go
1> set role sybase_ts_role on
2> go
1> dbcc dbrepair(dbname, ltmignore)
2> go
1> sp_role "revoke", sybase_ts_role, sa
2> go

This scenario is also very common if you load a copy of your replicated
production database into development.

Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

Additional Information/Links


4.1 Links
4.2 Newsgroups


next prev ASE FAQ
-------------------------------------------------------------------------------

4.1 Links

-------------------------------------------------------------------------------

Thierry Antinolfi has a replication FAQ at his site http://pro.wanadoo.fr/
dbadevil that covers a lot of good stuff.

Rob Verschoor has a 'Replication Server Tips & Tricks' section on his site, as
well as an indispensable quick reference guide!

Back to top
-------------------------------------------------------------------------------

4.2 Newsgroups

-------------------------------------------------------------------------------

There are a number of newsgroups that can deal with questions. Sybase have
several in their own forums area.

For Replication Server:

sybase.public.rep-server
sybase.public.rep-agent

for SQL Remote and the issues of replicating with ASA:

sybase.public.sqlanywhere.replication

and of course, there is always the ubiquitous

comp.databases.sybase.

Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

David Owen

Jan 24, 2001, 5:14:13 AM
Posted-By: auto-faq 3.3.1 beta (Perl 5.005)
Archive-name: databases/sybase-faq/part2

URL: http://www.isug.com/Sybase_FAQ
Version: 1.2
Maintainer: David Owen
Last-modified: 2000/06/07
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


Sybase Frequently Asked Questions

Sybase FAQ Home Page Adaptive Server Enterprise FAQ Adaptive Server Anywhere
FAQ Repserver FAQ Search the FAQ

Adaptive Server Anywhere



0.0 Preamble
0.1 What is ASA?
0.2 On what platforms is ASA supported?
0.3 What applications is ASA good for?
0.4 When would I choose ASA over ASE?
0.5 Does ASA Support Replication?
0.6 What is ASA UltraLite?
0.7 Links for further information

-------------------------------------------------------------------------------

0.0 Preamble

I make no claims to be an ASA expert! I am beginning to use it more and more,
and as I use it I am able to add stuff with more authority to this list. All
of what is here is very general. I am pressing people to help write some more
meaty parts. There is nothing here on how to recover from crashes that must
happen, or equivalent sections for those in the ASE part. Performance and
Tuning would be a good section! If anyone out there knows of a good ASA FAQ,
then send it to me, and I will get it added. This is a resource that will help
us all. Come on all you TeamSybase/TeamPowerbuilder people, you must know
something on the subject <g>. It is unlikely that this is going to grow into a
particularly useful resource unless I get some serious help!
-------------------------------------------------------------------------------

0.1 What is ASA?

ASA is a fully featured DBMS with transactional integrity, automatic rollback
and recovery, declarative RI, triggers and stored procedures.

While it comes out of Sybase's "Mobile and Embedded" division, it is NOT
limited to "small, desktop applications". There are many ASA implementations
supporting over 100 concurrent users. While not as scalable as ASE, it does
offer SMP support and versions for various Unix flavors as well as Netware and
NT/w2k. Multi-gigabyte databases are commonly used.

ASA offers a number of features that are not to be found in ASE:

*row level BEFORE and AFTER triggers (see the sketch after this list)
*long varchar and BLOB up to gigabytes
*varchar up to 32k
*declarative RI with cascade actions
*all character and decimal data is stored var-len, using only the space
it needs

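As a small illustration of the first item in the list above, a hedged sketch
of an ASA row-level BEFORE trigger (the table and column names are invented
for the example):

CREATE TRIGGER set_modified BEFORE UPDATE ON customer
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    SET new_row.last_modified = CURRENT TIMESTAMP;
END;
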
ASA is designed to be low-maintenance:

*File size automatically grows
*self-tuning
*re-uses space from deletes

ASA also includes:

*Java stored procs
*Stored procedure debugger (I am not sure what sort of debugger, just that
it has one.)

-------------------------------------------------------------------------------

0.2 On what platforms is ASA supported?

Lots!

*NT/2000
*Windows 98/95
*Windows 3.1
*DOS
*Linux
*Solaris?

I suspect that the list is longer, but do not know for certain. Does ASA run
on Windows CE or is that UltraLite only?
-------------------------------------------------------------------------------

0.3 What applications is ASA good for?

ASA seems to have a number of niches. It is generally good at OLTP and can be
used as a basis for a general database project. There are certainly examples
of implementations supporting 100 or more users.

A major area for ASA databases is with applications that need to distribute the
database with the application as a general storage area for internal
components, but the database is not a major part of the deliverable. Sybase
themselves have done this with the IQ meta data storage. Prior to release 11
of IQ, the meta data was stored in an ASE database. Now, with IQ 12, the meta
data has moved to being stored in ASA. This makes the installation of IQ into
production environments much simpler.

ASA has excellent ODBC support, which makes it very attractive to tools
oriented towards ODBC.
-------------------------------------------------------------------------------

0.4 When would I choose ASA over ASE?

*Ease of administration, e.g., self-tuning optimizer, db file is an OS file
(not partition).
*Lower footprint - runs on "smaller" machines.
*Lower cost, ASA is definitely cheaper than ASE on the same platform.
*Want to use SQL Remote (asynchronous replication)
*More complete SQL92 implementation.

-------------------------------------------------------------------------------

0.5 Does ASA Support Replication?

In short, yes. ASA comes with SQL Remote, an asynchronous replication server.
SQL Remote is intended to be used in applications where the replication is
not intended to happen immediately. In fact, it may well be hours or even days
before the databases are synchronised. This makes it ideal for the roaming
salesman type apps where the guy is on the road all day and then dials in from
home, hotel or beach front to resynch his pay cheque^W^Wprice list with the
master server.
-------------------------------------------------------------------------------

0.6 What is ASA UltraLite?

UltraLite is a version of ASA that runs on handheld devices.

Deployment
Windows 95/98, NT, 2000, CE
Palm Computing platform
WindRiver VxWorks
DOS
Symbian EPOC
-------------------------------------------------------------------------------

0.7 I'm interested, where can I find more info?

Breck Carter has a very useful page at http://www.bcarter.com/home.html that is
full of detail.

General information can be found about ASA at:

http://www.sybase.com/products/anywhere/sql_productinfo.html

It is a bit of a marketing page but there are some pointers to white papers
etc.

A very well written reviewers guide can be found at

http://www.sybase.com/products/anywhere/sas_reviewers_guide.html

The page has a link to a pdf document that contains lots of useful information.

David Owen

Jan 24, 2001, 5:14:14 AM
Posted-By: auto-faq 3.3.1 beta (Perl 5.005)
Archive-name: databases/sybase-faq/part5

URL: http://www.isug.com/Sybase_FAQ
Version: 1.2
Maintainer: David Owen
Last-modified: 2000/06/07
Posting-Frequency: posted every 3rd month
A how-to-find-the-FAQ article is posted on the intervening months.


User Database Administration



1.2.1 Changing varchar(m) to varchar(n)
1.2.2 Frequently asked questions on Table partitioning
1.2.3 How do I manually drop a table?
1.2.4 Why not create all my columns varchar(255)?
1.2.5 What's a good example of a transaction?
1.2.6 What's a natural key?
1.2.7 Making a Stored Procedure invisible
1.2.8 Saving space when inserting rows monotonically
1.2.9 How to compute database fragmentation
1.2.10 Tasks a DBA should do...
1.2.11 How to implement database security
1.2.12 How to shrink a database
1.2.13 How do I turn on auditing of all SQL text sent to the server

next prev ASE FAQ
-------------------------------------------------------------------------------

1.2.1: Changing varchar(m) to varchar(n)

-------------------------------------------------------------------------------

Before you start:

select max(datalength(column_name))
from affected_table

In other words, please be sure you're going into this with your head on
straight.

How To Change System Catalogs

This information is Critical To The Defense Of The Free World, and you would be
Well Advised To Do It Exactly As Specified:
use master
go
sp_configure "allow updates", 1
go
reconfigure with override /* System 10 and below */
go
use victim_database
go
select name, colid
from syscolumns
where id = object_id("affected_table")
go
begin tran
go
update syscolumns
set length = new_value
where id = object_id("affected_table")
and colid = value_from_above
go
update sysindexes
set maxlen = maxlen + increase/decrease?
where id=object_id("affected_table")
and indid = 0
go
/* check results... cool? Continue... else rollback tran */
commit tran
go
use master
go
sp_configure "allow updates", 0
go
reconfigure /* System 10 and below */
go

Return to top
-------------------------------------------------------------------------------

1.2.2: FAQ on partitioning

-------------------------------------------------------------------------------

Index of Sections

*What Is Table Partitioning?
+Page Contention for Inserts
+I/O Contention
+Caveats Regarding I/O Contention

*Can I Partition Any Table?
+How Do I Choose Which Tables To Partition?

*Does Table Partitioning Require User-Defined Segments?
*Can I Run Any Transact-SQL Command on a Partitioned Table?
*How Does Partition Assignment Relate to Transactions?
*Can Two Tasks Be Assigned to the Same Partition?
*Must I Use Multiple Devices to Take Advantage of Partitions?
*How Do I Create A Partitioned Table That Spans Multiple Devices?
*How Do I Take Advantage of Table Partitioning with bcp in?
*Getting More Information on Table Partitioning

What Is Table Partitioning?

Table partitioning is a procedure that creates multiple page chains for a
single table.

The primary purpose of table partitioning is to improve the performance of
concurrent inserts to a table by reducing contention for the last page of a
page chain.

Partitioning can also potentially improve performance by making it possible to
distribute a table's I/O over multiple database devices.

Page Contention for Inserts

By default, SQL Server stores a table's data in one double-linked set of pages
called a page chain. If the table does not have a clustered index, SQL Server
makes all inserts to the table in the last page of the page chain.

When a transaction inserts a row into a table, SQL Server holds an exclusive
page lock on the last page while it inserts the row. If the current last page
becomes full, SQL Server allocates and links a new last page.

As multiple transactions attempt to insert data into the table at the same
time, performance problems can occur. Only one transaction at a time can obtain
an exclusive lock on the last page, so other concurrent insert transactions
block each other.

Partitioning a table creates multiple page chains (partitions) for the table
and, therefore, multiple last pages for insert operations. A partitioned table
has as many page chains and last pages as it has partitions.

I/O Contention

Partitioning a table can improve I/O contention when SQL Server writes
information in the cache to disk. If a table's segment spans several physical
disks, SQL Server distributes the table's partitions across fragments on those
disks when you create the partitions.

A fragment is a piece of disk on which a particular database is assigned space.
Multiple fragments can sit on one disk or be spread across multiple disks.

When SQL Server flushes pages to disk and your fragments are spread across
different disks, I/Os assigned to different physical disks can occur in
parallel.

To improve I/O performance for partitioned tables, you must ensure that the
segment containing the partitioned table is composed of fragments spread across
multiple physical devices.

Caveats Regarding I/O Contention

Be aware that when you use partitioning to balance I/O you run the risk of
disrupting load balancing even as you are trying to achieve it. The following
scenarios can keep you from gaining the load balancing benefits you want:

*You are partitioning an existing table. The existing data could be
sitting on any fragment. Because partitions are randomly assigned, you run
the risk of filling up a fragment. The partition will then steal space from
other fragments, thereby disrupting load balancing.
*Your fragments differ in size.
*The segment maps are configured such that other objects are using the
fragments to which the partitions are assigned.
*A very large bcp job inserts many rows within a single transaction.
Because a partition is assigned for the lifetime of a transaction, a huge
amount of data could go to one particular partition, thus filling up the
fragment to which that partition is assigned.


Can I Partition Any Table?


No. You cannot partition the following kinds of tables:

1. Tables with clustered indexes
2. SQL Server system tables
3. Work tables
4. Temporary tables
5. Tables that are already partitioned. However, you can unpartition and then
re-partition tables to change the number of partitions.


How Do I Choose Which Tables To Partition?


You should partition heap tables that have large amounts of concurrent insert
activity. (A heap table is a table with no clustered index.) Here are some
examples:

1. An "append-only" table to which every transaction must write
2. Tables that provide a history or audit list of activities
3. A new table into which you load data with bcp in. Once the data is loaded
in, you can unpartition the table. This enables you to create a clustered
index on the table, or issue other commands not permitted on a partitioned
table (see the sketch after this list).

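A hedged sketch of the sequence mentioned in item 3, reusing the my_table
example that appears later in this answer:

alter table my_table unpartition
go
create clustered index my_table_ci on my_table (names)
go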

Does Table Partitioning Require User-Defined Segments?


No. By design, each table is intrinsically assigned to one segment, called the
default segment. When a table is partitioned, any partitions on that table are
distributed among the devices assigned to the default segment.

In the example under "How Do I Create A Partitioned Table That Spans Multiple
Devices?", the table sits on a user-defined segment that spans three devices.

Can I Run Any Transact-SQL Command on a Partitioned Table?


No. Once you have partitioned a table, you cannot use any of the following
Transact-SQL commands on the table until you unpartition it:

1. create clustered index
2. drop table
3. sp_placeobject
4. truncate table
5. alter table table_name partition n


How Does Partition Assignment Relate to Transactions?


A user is assigned to a partition for the duration of a transaction. Assignment
of partitions resumes with the first insert in a new transaction. The user
holds the lock, and therefore partition, until the transaction ends.

For this reason, if you are inserting a great deal of data, you should batch it
into separate jobs, each within its own transaction. See "How Do I Take
Advantage of Table Partitioning with bcp in?", for details.

Can Two Tasks Be Assigned to the Same Partition?


Yes. SQL Server randomly assigns partitions. This means there is always a
chance that two users will vie for the same partition when attempting to insert
and one would lock the other out.

The more partitions a table has, the lower the probability of users trying to
write to the same partition at the same time.

Must I Use Multiple Devices to Take Advantage of Partitions?


It depends on which type of performance improvement you want.

Table partitioning improves performance in two ways: primarily, by decreasing
page contention for inserts and, secondarily, by decreasing i/o contention.
"What Is Table Partitioning?" explains each in detail.

If you want to decrease page contention you do not need multiple devices. If
you want to decrease i/o contention, you must use multiple devices.

How Do I Create A Partitioned Table That Spans Multiple Devices?


Creating a partitioned table that spans multiple devices is a multi-step
procedure. In this example, we assume the following:

*We want to create a new segment rather than using the default segment.
*We want to spread the partitioned table across three devices, data_dev1,
data_dev2, and data_dev3.

Here are the steps:

1. Define a segment:

sp_addsegment newsegment, my_database, data_dev1

2. Extend the segment across all three devices:

sp_extendsegment newsegment, my_database, data_dev2
sp_extendsegment newsegment, my_database, data_dev3

3. Create the table on the segment:

create table my_table
(names varchar(80) not null)
on newsegment

4. Partition the table:

alter table my_table partition 30


How Do I Take Advantage of Table Partitioning with bcp in?


You can take advantage of table partitioning with bcp in by following these
guidelines:

1. Break up the data file into multiple files and simultaneously run each of
these files as a separate bcp job against one table.

Running simultaneous jobs increases throughput.

2. Choose a number of partitions greater than the number of bcp jobs.

Having more partitions than processes (jobs) decreases the probability of
page lock contention.

3. Use the batch option of bcp in. For example, after every 100 rows, force a
commit. Here is the syntax of this command:

bcp table_name in filename -b100

Each time a transaction commits, SQL Server randomly assigns a new
partition for the next insert. This, in turn, reduces the probability of
page lock contention.


Getting More Information on Table Partitioning


For more information on table partitioning, see the chapter on controlling
physical data placement in the SQL Server Performance and Tuning Guide.

Return to top
-------------------------------------------------------------------------------

1.2.3: How to manually drop a table

-------------------------------------------------------------------------------

Occasionally you may find that, after issuing a drop table command, the SQL
Server crashed and consequently the table didn't drop entirely. Sure, you can't
see it, but that sucker is still floating around somewhere.

Here's a list of instructions to follow when trying to drop a corrupt table:

1.
sp_configure allow, 1
go
reconfigure with override
go


2. Write db_id down.
use db_name
go
select db_id()
go

3. Write down the id of the bad_table:
select id
from sysobjects
where name = bad_table_name
go

4. You will need these index IDs to run dbcc extentzap. Also, remember that if
the table has a clustered index you will need to run extentzap on index
"0", even though there is no sysindexes entry for that indid.
select indid
from sysindexes
where id = table_id
go

5. This is not required but a good idea:
begin transaction
go

6. Type in this short script; it gets rid of all system catalog information
for the object, including any object and procedure dependencies that may be
present.

Some of the entries are unnecessary but better safe than sorry.
declare @obj int
select @obj = id from sysobjects where name = "bad_table_name"
delete syscolumns where id = @obj
delete sysindexes where id = @obj
delete sysobjects where id = @obj
delete sysprocedures where id in
(select id from sysdepends where depid = @obj)
delete sysdepends where depid = @obj
delete syskeys where id = @obj
delete syskeys where depid = @obj
delete sysprotects where id = @obj
delete sysconstraints where tableid = @obj
delete sysreferences where tableid = @obj
delete sysdepends where id = @obj
go

7. Just do it!
commit transaction
go

8. Gather information to run dbcc extentzap:
use master
go
sp_dboption db_name, read, true
go
use db_name
go
checkpoint
go

9. Run dbcc extentzap once for each index (including index 0, the data level)
that you got from above:
use master
go
dbcc traceon (3604)
go
dbcc extentzap (db_id, obj_id, indx_id, 0)
go
dbcc extentzap (db_id, obj_id, indx_id, 1)
go

Notice that extentzap runs twice for each index. This is because the
last parameter (the sort bit) might be 0 or 1 for each index, and you
want to be absolutely sure you clean them all out.

10. Clean up after yourself.
sp_dboption db_name, read, false
go
use db_name
go
checkpoint
go
sp_configure allow, 0
go
reconfigure with override
go

Return to top
-------------------------------------------------------------------------------

1.2.4: Why not max out all my columns?

-------------------------------------------------------------------------------

People occasionally ask the following valid question:


Suppose I have varying lengths of character strings none of which should
exceed 50 characters.

Is there any advantage of last_name varchar(50) over this last_name varchar
(255)?

That is, for simplicity, can I just define all my varying strings to be
varchar(255) without even thinking about how long they may actually be? Is
there any storage or performance penalty for this.

There is no performance penalty by doing this but as another netter pointed
out:


If you want to define indexes on these fields, then you should specify the
smallest size because the sum of the maximal lengths of the fields in the
index can't be greater than 256 bytes.

and someone else wrote in saying:


Your data structures should match the business requirements. This way the
data structures themselves become a data dictionary for others to model
their applications (report generation and the like).
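
Coming back to the index limit, here is a minimal sketch of the difference
(table, column and index names are purely illustrative):

create table employee (
    last_name  varchar(50)  not null,
    first_name varchar(50)  not null
)
go

/* 50 + 50 = 100 bytes, comfortably inside the limit quoted above */
create index name_idx on employee (last_name, first_name)
go

Had both columns been declared varchar(255), the same composite index could
total 510 bytes, which breaks that limit.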

Return to top
-------------------------------------------------------------------------------

1.2.5: What's a good example of a transaction?

-------------------------------------------------------------------------------


This answer is geared for Online Transaction Processing (OLTP)
applications.

To gain maximum throughput all your transactions should be in stored procedures
- see Q1.5.8. The transactions within each stored procedure should be short and
simple. All validation should be done outside of the transaction and only the
modification to the database should be done within the transaction. Also, don't
forget to name the transaction for sp_whodo - see Q9.2.

The following is an example of a good transaction:
/* perform validation */
select ...
if ... /* error */
    /* give error message */
else /* proceed */
begin
    begin transaction acct_addition
        update ...
        insert ...
    commit transaction acct_addition
end
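
Since the advice above is to put the transaction inside a stored procedure, the
same pattern might be packaged roughly as follows (a sketch only; procedure,
table and column names are illustrative):

create procedure add_account_entry @acct int, @amount money
as
    /* validation happens outside of the transaction */
    if not exists (select 1 from accounts where acct_no = @acct)
    begin
        print "unknown account"
        return 1
    end

    /* the transaction itself is short, simple and named */
    begin transaction acct_addition
        update accounts
        set    balance = balance + @amount
        where  acct_no = @acct
        insert acct_audit (acct_no, amount) values (@acct, @amount)
    commit transaction acct_addition

    return 0
go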

The following is an example of a bad transaction:
begin transaction poor_us
    update X ...
    select ...
    if ... /* error */
        /* give error message */
    else /* proceed */
    begin
        update ...
        insert ...
    end
commit transaction poor_us

This is bad because:

*the locks taken by the first update on table X are held for the duration of
the transaction. The idea with OLTP is to get in and out fast.
*If an error message is presented to the end user and we await their
response, we'll maintain the lock on table X until the user presses return.
If the user is out in the can we can wait for hours.

Return to top
-------------------------------------------------------------------------------

1.2.6: What's a natural key?

-------------------------------------------------------------------------------

Let me think back to my database class... okay, I can't think that far so I'll
paraphrase... essentially, a natural key is a key for a given table that
uniquely identifies the row. It's natural in the sense that it follows the
business or real world need.

For example, assume that social security numbers are unique (they are meant to
be unique, but in practice that's not always the case). If you had the
following employee table:
employee:

ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then a natural key would be ssn. If the combination of f_name and l_name were
unique at this company, then another natural key would be f_name, l_name. As a
matter of fact, you can have many natural keys in a given table but in practice
what one does is build a surrogate (or artificial) key.

The surrogate key is guaranteed to be unique because (wait, get back, here it
goes again) it's typically a monotonically increasing value. Okay, my
mathematician wife would be proud of me... really all it means is that each new
key is one greater than the previous one: i+1

The reason one uses a surrogate key is because your joins will be faster.

If we extended our employee table to have a surrogate key:
employee:

id identity
ssn char(09)
f_name char(20)
l_name char(20)
title char(03)

Then instead of doing the following:
where a.f_name = b.f_name
and a.l_name = b.l_name

we'd do this:
where a.id = b.id

We can build indexes on these keys and since Sybase's atomic storage unit is
2K, we can stash more values per 2K page with smaller indexes thus giving us
better performance (imagine the key being 40 bytes versus being say 4 bytes...
how many 40 byte values can you stash in a 2K page versus a 4 byte value? --
and how much wood could a wood chuck chuck, if a wood chuck could chuck wood?)
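
For completeness, the surrogate key column might be declared something like
this (a sketch; the numeric precision and index name are illustrative):

create table employee (
    id     numeric(9,0) identity,
    ssn    char(9)      not null,
    f_name char(20)     not null,
    l_name char(20)     not null,
    title  char(3)      not null
)
go

create unique clustered index employee_id_idx on employee (id)
go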

Does it have anything to do with natural joins?

Um, not really... from "A Guide to Sybase..", McGovern and Date, p. 112:


The equi-join by definition must produce a result containing two identical
columns. If one of those two columns is eliminated, what is left is called
the natural join.

Return to top
-------------------------------------------------------------------------------

1.2.7: Making a Stored Procedure invisible

-------------------------------------------------------------------------------

System 11.5 and above

It is now possible to encrypt your stored procedure code that is stored in the
syscomments table. This is preferable to the old method of deleting the data,
as deleting will impact future upgrades. You can encrypt the text with the
sp_hidetext system procedure.
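
For example (the procedure name is illustrative):

sp_hidetext "my_secret_proc"
go

As far as I know the operation cannot be reversed, so keep the original source
of the procedure somewhere safe.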

Pre-System 11.5

Perhaps you are trying to prevent the buyer of your software from defncopy'ing
all your stored procedures. It is perfectly safe to delete the syscomments
entries of any stored procedures you'd like to protect:
sp_configure "allow updates", 1
go
reconfigure with override /* System 10 and below */
go
use affected_database
go
delete syscomments where id = object_id("procedure_name")
go
use master
go
sp_configure "allow updates", 0
go

I believe in future releases of Sybase we'll be able to see the SQL that is
being executed. I don't know if that would be simply the stored procedure name
or the SQL itself.

Return to top
-------------------------------------------------------------------------------

1.2.8: Saving space when inserting rows monotonically

-------------------------------------------------------------------------------

If the columns that comprise the clustered index are monotonically increasing
(that is, new row key values are greater than those previously inserted) the
following System 11 dbcc tune will not split the page when it's half way full.
Rather it'll let the page fill and then allocate another page:
dbcc tune(ascinserts, 1, "my_table")

By the way, SyBooks is wrong when it states that the above needs to be reset
when the SQL Server is rebooted. This is a permanent setting.

To undo it:
dbcc tune(ascinserts, 0, "my_table")

Return to top
-------------------------------------------------------------------------------

1.2.9: How to compute database fragmentation

-------------------------------------------------------------------------------

Command

dbcc traceon(3604)
go
dbcc tab(production, my_table, 0)
go

Interpretation

A delta of one means the next page is on the same track, two is a short seek,
three is a long seek. You can play with these constants but they aren't that
important.

A table I thought was unfragmented had L1 = 1.2 L2 = 1.8

A table I thought was fragmented had L1 = 2.4 L2 = 6.6

How to Fix

You fix a fragmented table that has a clustered index by dropping and
re-creating the index. This measurement isn't the correct one for tables
without clustered indexes. If your table doesn't have a clustered index,
create a dummy one and drop it, as sketched below.
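
Both cases, as a sketch (table, index and column names are illustrative):

/* table already has a clustered index: drop it and re-create it */
drop index my_table.my_table_ci
go
create clustered index my_table_ci on my_table (key_col)
go

/* no clustered index: create a dummy one and drop it straight away */
create clustered index tmp_defrag on my_table (key_col)
go
drop index my_table.tmp_defrag
go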

Return to top
-------------------------------------------------------------------------------

1.2.10: Tasks a DBA should do...

-------------------------------------------------------------------------------

A good presentation of a DBA's duties has been made available by Jeff Garbus (
je...@soaringeagleltd.com) of Soaring Eagle Consulting Ltd (http://
www.soaringeagleltd.com), where numerous books can also be found. These are
Powerpoint slides converted to web pages and so may be difficult to view with a
text browser!

An alternative view is catalogued below. (OK, so this list is crying out for a
bit of a revamp since checkstorage came along Ed!)
DBA Tasks
+------------------------+---------------+--------------------------------+
+------------------------+---------------+--------------------------------+
| Task | Reason | Period |
+------------------------+---------------+--------------------------------+
| dbcc checkdb, | I consider | If your SQL Server permits, |
| checkcatalog, | these the | daily before your database |
| checkalloc | minimal | dumps. If this is not possible |
| | dbcc's to | due to the size of your |
| | ensure the | databases, then try the |
| | integrity of | different options so that the |
| | your database | end of, say, a week, you've |
| | | run them all. |
+------------------------+---------------+--------------------------------+
| Disaster recovery | Always be | |
| scripts - scripts to | prepared for | |
| rebuild your SQL | the worst. | |
| Server in case of | Make sure to | |
| hardware failure | test them. | |
+------------------------+---------------+--------------------------------+
| scripts to logically | You can | Daily |
| dump your master | selectively | |
| database, that is bcp | rebuild your | |
| the critical system | database in | |
| tables: sysdatabases, | case of | |
| sysdevices, syslogins, | hardware | |
| sysservers, sysusers, | failure | |
| syssegments, | | |
| sysremotelogins | | |
+------------------------+---------------+--------------------------------+
| %ls -la disk_devices | A system | After any change as well as |
| | upgrade is | daily |
| | known to | |
| | change the | |
| | permissions. | |
+------------------------+---------------+--------------------------------+
| dump the user | CYA* | Daily |
| databases | | |
+------------------------+---------------+--------------------------------+
| dump the transaction | CYA | Daily |
| logs | | |
+------------------------+---------------+--------------------------------+
| dump the master | CYA | After any change as well as |
| database | | daily |
+------------------------+---------------+--------------------------------+
| System 11 and beyond - | This is the | After any change as well as |
| save the $DSQUERY.cfg | configuration | daily |
| to tape | that you've | |
| | dialed in, | |
| | why redo the | |
| | work? | |
+------------------------+---------------+--------------------------------+
| update statistics on | To ensure the | Depending on how often your |
| frequently changed | performance | major tables change. Some |
| tables and | of your SQL | tables are pretty much static |
| sp_recompile | Server | (e.g. lookup tables) so they |
| | | don't need an update |
| | | statistics, other tables |
| | | suffer severe trauma (e.g. |
| | | massive updates/deletes/ |
| | | inserts) so an update stats |
| | | needs to be run either nightly |
| | | /weekly/monthly. This should |
| | | be done using cronjobs. |
+------------------------+---------------+--------------------------------+
| create a dummy SQL | See disaster | When time permits |
| Server and do bad | recovery! | |
| things to it: delete | | |
| devices, destroy | | |
| permissions... | | |
+------------------------+---------------+--------------------------------+
| Talk to the | It's better | As time permits. |
| application | to work with | |
| developers. | them than | |
| | against them. | |
+------------------------+---------------+--------------------------------+
| Learn new tools | So you can | As time permits. |
| | sleep! | |
+------------------------+---------------+--------------------------------+
| Read c.d.s | Passes the | Priority One! |
| | time. | |
+------------------------+---------------+--------------------------------+

* Cover Your Ass

Return to top
-------------------------------------------------------------------------------

1.2.11: How to implement database security

-------------------------------------------------------------------------------

This is a brief run-down of the features and ideas you can use to implement
database security:

Logins, Roles, Users, Aliases and Groups

*sp_addlogin - Creating a login adds a basic authorisation for an account
- a username and password - to connect to the server. By default, no access
is granted to any individual databases.
*sp_adduser - A user is the addition of an account to a specific database.
*sp_addalias - An alias is a method of allowing an account to use a
specific database by impersonating an existing database user or owner.
*sp_addgroup - Groups are collections of users at the database level.
Users can be added to groups via the sp_adduser command.

A user can belong to only one group - a serious limitation that Sybase
might be addressing soon according to the ISUG enhancements requests.
Permissions on objects can be granted or revoked to or from users or
groups.

*sp_role - A role is a high-level Sybase authorisation to act in a
specific capacity for administration purposes. Refer to the Sybase
documentation for details.

Recommendations

Make sure there is a unique login account for each physical person and/or
process that uses the server. Creating generic logins used by many people or
processes is a bad idea - there is a loss of accountability and it makes it
difficult to track which particular person is causing server problems when
looking at the output of sp_who. Note that the output of sp_who gives a
hostname - properly coded applications will set this value to something
meaningful (ie. the machine name the client application is running from) so you
can see where users are running their programs. Note also that if you look at
master..sysprocesses rather than just sp_who, there is also a program_name.
Again, properly coded applications will set this (eg. to 'isql') so you can see
which application is running. If you're coding your own client applications,
make sure you set hostname and program_name via the appropriate Open Client
calls. One imaginative use I've seen of the program_name setting is to
incorporate the connection time into the name, eg APPNAME-DDHHMM (you have 16
characters to play with), as there's no method of determining this otherwise.

Set up groups, and add your users to them. It is much easier to manage an
object permissions system in this way. If all your permissions are set to
groups, then adding a user to the group ensures that users automatically
inherit the correct permissions - administration is *much* simpler.
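
A minimal sketch of the login/group/user sequence described above (all names
are illustrative):

/* server level: create the login */
sp_addlogin "jbloggs", "initial_password"
go

use accounts_db
go

/* database level: create a group and add the user straight into it */
sp_addgroup clerks
go
sp_adduser "jbloggs", "jbloggs", clerks
go

/* grant permissions to the group, never to individual users */
grant execute on upd_account to clerks
go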

Objects and Permissions

Access to database objects is defined by granting and/or revoking various
access rights to and from users or groups. Refer to the Sybase documentation
for details.

Recommendations

The ideal setup has all database objects being owned by the dbo, meaning no
ordinary users have any default access at all. Specific permissions users
require to access the database are granted explicitly. As mentioned above - set
permissions for objects to a group and add users to that group. Any new user
added to the database via the group then automatically obtains the correct set
of permissions.

Preferably, no access is granted at all to data tables, and all read and write
activity is accomplished through stored procedures that users have execute
permission on. The benefit of this from a security point of view is that access
can be rigidly controlled with reference to the data being manipulated, user
clearance levels, time of day, and anything else that can be programmed via
T-SQL. The other benefits of using stored procedures are well known (see Q1.5.8
). Obviously whether you can implement this depends on the nature of your
application, but the vast majority of in-house-developed applications can rely
solely on stored procedures to carry out all the work necessary. The only
server-side restriction on this method is the current inability of stored
procedures to adequately handle text and image datatypes (see Q1.5.12). To get
around this views can be created that expose only the necessary columns to
direct read or write access.

Views

Views can be a useful general security feature. Where stored procedures are
inappropriate views can be used to control access to tables to a lesser extent.
They also have a role in defining row-level security - eg. the underlying table
can have a security status column joined to a user authorisation level table in
the view so that users can only see data they are cleared for. Obviously they
can also be used to implement column-level security by screening out sensitive
columns from a table.
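
As a sketch of the row-level idea (the tables and columns are illustrative;
user_name() returns the current database user name):

create view emp_cleared
as
select e.id, e.f_name, e.l_name
from   employee e, user_clearance u
where  u.db_user = user_name()
and    e.security_status <= u.clearance_level
go

grant select on emp_cleared to clerks
go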

Triggers

Triggers can be used to implement further levels of security - they could be
viewed as a last line of defence in being able to rollback unauthorised write
activity (they cannot be used to implement any read security). However, there
is a strong argument that triggers should be restricted to doing what they were
designed for - implementing referential integrity - rather than being loaded up with
application logic.

Administrative Roles

With Sybase version 10 came the ability to grant certain administrative roles
to user accounts. Accounts can have sa-level privilege, or be restricted to
security or operator roles - see sp_role.

Recommendations

The use of any generic account is not a good idea. If more than one person
requires access as sa to a server, then it is more accountable and traceable if
they each have an individual account with sa_role granted.

Return to top
-------------------------------------------------------------------------------

1.2.12: How to Shrink a Database

-------------------------------------------------------------------------------


Warning: This document has not been reviewed. Treat it as alpha-test
quality information and report any problems and suggestions to
br...@sybase.com

It has historically been difficult to shrink any database except tempdb
(because it is created fresh every boot time). The two methods commonly used
have been:

1. Ensure that you have scripts for all your objects (some tools like SA
Companion, DB Artisan or dbschema.pl from Sybperl can create scripts from
an existing database), then bcp out your data, drop the database, recreate
it smaller, run your scripts, and bcp in your data.
2. Use a third-party tool such as DataTool's SQL Backtrack, which in essence
automates the first process.

This technote outlines a third possibility that can work in most cases.

An Unsupported Method to Shrink a Database

This process is fairly trivial in some cases, such as removing a recently added
fragment or trimming a database that has a log fragment as its final
allocation, but can also be much more complicated or time consuming than the
script and bcp method.

General Outline

The general outline of how to do it is:

1. Make a backup of the current database
2. Migrate data from sysusages fragments with high lstart values to fragments
with low lstart values.
3. Edit sysusages to remove high lstart fragments that no longer have data
allocations.
4. Reboot sql server.

Details

1. Dump your database. If anything goes wrong, you will need to recover from
this backup!
2. Decide how many megabytes of space you wish to remove from your database.
3. Examine sysusages for the database. You will be shrinking the database by
removing the fragments with the highest lstart values. If the current
fragments are not of appropriate sizes, you may need to drop the database,
recreate it so there are more fragments, and reload the dump.

A trivial case: An example of a time when you can easily shrink a
database is if you have just altered it and are sure there has been no
activity on the new fragment. In this case, you can directly delete the
last row in sysusages for the db (this row was just added by alter db)
and reboot the server and it should come up cleanly.

4. Change the segmaps of the fragments you plan to remove to 0. This will
prevent future data allocations to these fragments.

Note: If any of the fragments you are using have user defined segments
on them, drop those segments before doing this.
sp_configure "allow updates", 1
go
reconfigure with override -- not necessary in System 11
go
update sysusages
set segmap = 0
where dbid = <dbid>
and lstart = <lstart>
go
dbcc dbrepair(<dbname>, remap)
go

Ensure that there is at least one data (segmap 3) and one log (segmap 4)
fragment, or one mixed (segmap 7) fragment.

If the server has been in use for some time, you can shrink it by deleting
rows from sysusages for the db, last rows first, after making sure that no
objects have any allocations on the usages.

5. Determine which objects are on the fragments you plan to remove.
dbcc traceon(3604)
go
dbcc usedextents(dbid, 0, 0, 1)
go

Find the extent with the same value as the lstart of the first fragment you
plan to drop. You need to migrate every object appearing from this point on
in the output.

6. Migrate these objects onto earlier fragments in the database.

Objids other than 0 or 99 are objects that you must migrate or drop. You
can migrate a user table by building a new clustered index on the table
(since the segmap was changed, the new allocations will not go on this
fragment).
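
For a user table that might look something like this (a sketch; table, index
and column names are illustrative):

/* if the table already has a clustered index, drop it first */
drop index big_table.big_table_ci
go

/* re-creating the clustered index copies the table; because the segmap of
   the old fragment is now 0, the copy lands on the remaining fragments */
create clustered index big_table_ci on big_table (key_col)
go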

You can migrate some system tables (but not all) using the sp_fixindex
command to rebuild its clustered index. However, there are a few system
tables that cannot have their clustered indexes rebuilt, and if they have
any allocations on the usage, you are out of luck.

If the objid is 8, then it is the log. You can migrate the log by ensuring
that another usage has a log segment (segmap 4 or 7). Do enough activity on
the database to fill an extent's worth of log pages, then checkpoint and
dump tran.
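
That is, something like this (database name illustrative; dump to your usual
log dump device instead of using truncate_only if you need to keep the log
chain intact):

checkpoint
go
dump tran my_db with truncate_only
go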

Once you have moved all the objects, delete the row from sysusages and
reboot the server.

Run dbcc checkdb and dbcc checkalloc on the database to be sure you are ok,
then dump the database again.

Return to top
-------------------------------------------------------------------------------

1.2.13: How do I audit the SQL sent to the server?

-------------------------------------------------------------------------------

This does not seem to be well documented, so here is a quick means of auditing
the SQL text that is sent to the server. Note that this simply audits the SQL
sent to the server. So, if your user process executes a big stored procedure,
all you will see here is a call to the stored procedure. None of the SQL that
is executed as part of the stored procedure will be listed.

Firstly, you need to have installed Sybase security (which involves installing
the sybsecurity database and loading it using the script $SYBASE/scripts/
installsecurity). Read the Sybase Security Administration Manual, you may
want to enable a threshold procedure to toggle between a couple of audit
tables. Be warned that the default configuration option "suspend auditing
when device full" is set to 1. This means that the server will suspend all
normal SQL operations if the audit database becomes full, until the sso logs
in and gets rid of some data. You might want to consider changing this to 0
unless yours is a particularly sensitive installation.

Once that is done, you need to enable auditing. If you haven't already, you
will need to restart ASE in order to start the audit subsystem. Then comes the
bit that does not seem well documented, you need to select an appropriate audit
option, and the one for the SQL text is "cmdtext". From the sybsecurity
database, issue

sp_audit "cmdtext",<username>,"all","on"

for each user on the system that you wish to collect the SQL for. sp_audit seems
to imply that you can replace "<username>" with all, but I get the error
message "'all' is not a valid user name". Finally, enable auditing for the
system as a whole using

sp_configure "auditing",1
go

If someone knows where in the manuals this is well documented, I will add a
link/reference.

Note: The stored procedure sp_audit had a different name under previous
releases. I think that it was called sp_auditoption. Also, to get a full list
of the options and their names, go into sybsecurity and simply run sp_audit
with no arguments.
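
Once everything is switched on, the captured statements end up in the audit
trail in the sybsecurity database. Something like the following should show
them (a sketch from memory; 11.5 and above split the trail across tables named
sysaudits_01 to sysaudits_08, earlier versions used a single sysaudits table):

use sybsecurity
go
select eventtime, loginname, extrainfo
from   sysaudits_01
go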

Return to top
-------------------------------------------------------------------------------

next prev ASE FAQ


Adaptive Server Enterprise



0. What's in a name?
1. ASE Administration
1.1 Basic Administration
1.2 User Database Administration
1.3 Advanced Administration
1.4 General Troubleshooting
1.5 Performance and Tuning
2. Platform Specific Issues
3. DBCC's
4. isql
5. bcp
6. SQL Development
6.1 SQL Fundamentals
6.2 SQL Advanced
7. Open Client
9. Freeware
10. Sybase Technical News
11. Additional Information
12. Miscellany


-------------------------------------------------------------------------------

What's in a name?

Throughout this FAQ you will find references to SQL Server and, starting with
this release, ASE or Adaptive Server Enterprise to give it its full name. You
might also be a little further confused, since Microsoft also seem to have a
product called SQL Server.

Well, back at about release 4.2 of Sybase SQL Server, the products were exactly
the same. Microsoft were to do the port to NT. Well, it is pretty well
documented, but there was a falling out. Both companies kept the same name for
their data servers and confusion began to reign. In an attempt to try and sort
this out, Sybase renamed their product Adaptive Server Enterprise (ASE)
starting with version 11.5.

I found this quote in a Sybase manual the other day:

Since changing the name of Sybase SQL Server to Adaptive Server Enterprise,
Sybase uses the names Adaptive Server and Adaptive Server Enterprise to refer
collectively to all supported versions of the Sybase SQL Server and Adaptive
Server Enterprise. Version-specific references to Adaptive Server or SQL Server
include version numbers.

I will endeavour to try and do the same within the FAQ, but the job is far from
complete!

Back to Top

Basic ASE Administration



1.1.1 What is SQL Server and ASE anyway?
1.1.2 How do I start/stop SQL Server when the CPU reboots?
1.1.3 How do I move tempdb off of the master device?
1.1.4 How do I correct timeslice -201?
1.1.5 The how's and why's on becoming a Certified Sybase
Professional DBA (CSPDBA)?
1.1.6 RAID and Sybase
1.1.7 How to swap a db device with another
1.1.8 Server naming and renaming
1.1.9 How do I interpret the tli strings in the interface file?
1.1.10 How can I tell the datetime my Server started?
1.1.11 Raw partitions or regular files?
1.1.12 Is Sybase Y2K (Y2000) compliant?
1.1.13 How Can I Run the SQL Server Upgrade Manually?

next # ASE FAQ
-------------------------------------------------------------------------------

1.1.1: What is SQL Server and ASE?

-------------------------------------------------------------------------------

Overview

Before Sybase System 10 (as they call it) we had Sybase 4.x. Sybase System 10
has some significant improvements over the Sybase 4.x product line, namely:

*the ability to allocate more memory to the dataserver without degrading
its performance.
*the ability to have more than one database engine to take advantage of
multi-processor cpu machines.
*a minimally intrusive process to perform database and transaction dumps.

Background and More Terminology

A SQL Server is simply a Unix process. It is also known as the database engine.
It has multiple threads to handle asynchronous I/O and other tasks. The number
of threads spawned is the number of engines (more on this in a second) times
five. This is the current implementation of Sybase System 10, 10.0.1 and 10.0.2
on IRIX 5.3.

Each SQL dataserver allocates the following resources from a host machine:

*memory and
*raw partition space.

Each SQL dataserver can have up to 255 databases. In most implementations the
number of databases is limited to what seems reasonable based on the load on
the SQL dataserver. That is, it would be impractical to house all of a large
company's databases under one SQL dataserver because the SQL dataserver (a Unix
process) will become overloaded.

That's where the DBA's experience comes in with interrogation of the user
community to determine how much activity is going to result on a given database
or databases and from that we determine whether to create a new SQL Server or
to house the new database under an existing SQL Server. We do make mistakes
(and businesses grow) and have to move databases from one SQL Server to
another. And at times SQL Servers need to move from one CPU server to another.

With Sybase System 10, each SQL Server can be configured to have more than one
engine (each engine is again a Unix process). There's one primary engine that
is the master engine and the rest of the engines are subordinates. They are
assigned tasks by the master.

Interprocess communication among all these engines is accomplished with shared
memory.


Sometimes when a DBA issues a Unix kill command to extinguish a maverick
SQL Server, the subordinate engines are forgotten. This leaves the shared
memory allocated and eventually we may get into situations where swapping
occurs because this memory is locked. To find engines that belong to no
master SQL Server, simply look for engines owned by /etc/init (process id
1). These engines can be killed -- this is just FYI and is a DBA duty.

Before presenting an example of a SQL Server, some other topics should be
covered.

Connections

A SQL Server has connections to it. A connection can be viewed as a user login
but it's not necessarily so. That is, a client (a user) can spark up multiple
instances of their application and each client establishes its own connection
to the SQL dataserver. Some clients may require two or more per invocation. So
typically DBA's are only concerned with the number of connections because the
number of users typically does not provide sufficient information for us to do
our job.


Connections take up SQL Server resources, namely memory, leaving less
memory for the SQL Servers' available cache.

SQL Server Buffer Cache

In Sybase 4.0.1 there was a limit to the amount of memory that could be
allocated to a SQL Server. It was around 80MB, with 40MB being the typical max.
This was due to internal implementations of Sybase's data structures.

With Sybase System 10 there really is no limit. For instance, we have a SQL
Server cranked up to 300MB.

The memory in a SQL Server is primarily used to cache data pages from disk.
Consider that the SQL Server is a light weight Operating System: handling user
(connections), allocating memory to users, keeping track of which data pages
need to be flushed to disk and the sort. Very sophisticated and complex.
Obviously if a data page is found in memory it's much faster to retrieve than
going out to disk.

Each connection takes away a little bit from the available memory that is used
to cache disk pages. Upon startup, the SQL Server pre-allocates the memory that
is needed for each connection so it's not prudent to configure 500 connections
when only 300 are needed. We'd waste 200 connections and the memory associated
with that. On the other hand, it is also imprudent to under configure the
number of connections; users have a way of soaking up a resource (like a SQL
Server) and if users have all the connections a DBA cannot get into the server
to allocate more connections.

One of the neat things about a SQL Server is that it reaches (just like a Unix
process) a working set. That is, upon startup it'll do a lot of physical I/O's
to seed its cache, to get lookup information for typical transactions and the
like. So initially, the first users have heavy hits because their requests have
to be performed as a physical I/O. Subsequent transactions have less physical I
/O and more logical I/O's. Logical I/O is an I/O that is satisfied in the SQL
Servers' buffer cache. Obviously, this is the preferred condition.

DSS vs OLTP

We throw around terms like everyone is supposed to know this high tech lingo.
The problem is that they are two different animals that require a SQL Server to
be tuned accordingly for each.

Well, here's the low down.

DSS
Decision Support System
OLTP
Online Transaction Processing

What do these mean? OLTP applications are those that have very short orders of
work for each connection: fetch this row and with the results of it update one
or two other rows. Basically, a small number of rows are affected per transaction
in rapid succession, with no significant wait times between operations in a
transaction.

DSS is the lumbering elephant in the database world (unless you do some
tricks... out of this scope). DSS requires a user to comb through gobs of data
to aggregate some values. So the transactions typically involve thousands of
rows. A big difference from OLTP.

We never want to have DSS and OLTP on the same SQL Server because the nature of
OLTP is to grab things quickly but the nature of DSS is to stick around for a
long time reading tons of information and summarizing the results.

What a DSS application does is flush out the SQL Server's data page cache
because of the tremendous amount of I/O. This is obviously very bad for OLTP
applications because the small transactions are now hurt by this trauma. When
it was only OLTP a great percentage of I/O was logical (satisfied in the
cache); now transactions must perform physical I/O.

That's why it's important in Sybase not to mix DSS and OLTP, at least until
System 11 arrives.


Sybase System 11 release will allow for the mixing of OLTP and DSS by
allowing the DBA to partition (and name) the SQL Server's buffer cache and
assign it to different databases and/or objects. The idea is to allow DSS
to only affect their pool of memory and thus allowing OLTP to maintain its
working set of memory.

Asynchronous I/O

Why async I/O? The idea is that in a typical online transaction processing
(OLTP) application, you have many connections (over 200 connections) and short
transactions: get this row, update that row. These transactions are typically
spread across different tables of the databases. The SQL Server can then
perform each one of these asynchronously without having to wait for others to
finish. Hence the importance of having async I/O fixed on our platform.

Engines

Sybase System 10 can have more than one engine (as stated above). Sybase has
trace flags to pin the engines to a given CPU processor but we typically don't
do this. It appears that the master engine goes to processor 0 and subsequent
subordinates to the next processor.

Currently, Sybase does not scale linearly. That is, five engines do not make
Sybase perform five times as fast; in fact, we max out with four engines.
After that performance starts to degrade. This is supposed to be fixed with
Sybase System 11.

Putting Everything Together

As previously mentioned, a SQL Server is a collection of databases with
connections (that are the users) to apply and retrieve information to and from
these containers of information (databases).

The SQL Server is built and its master device is typically built over a medium
sized (50MB) raw partition. The tempdb is built over a cooked (regular - as
opposed to a raw device) file system to realize any performance gains by
buffered writes. The databases themselves are built over the raw logical
devices to ensure their integrity.

Physical and Logical Devices

Sybase likes to live in its own little world. This shields the DBA from the
outside world known as Unix (or VMS). However, it needs to have a conduit to
the outside world and this is accomplished via devices.

All physical devices are mapped to logical devices. That is, given a physical
device (such as /lv1/dumps/tempdb_01.efs or /dev/rdsk/dks1ds0) it is mapped by
the DBA to a logical device. Depending on the type of the device, it is
allocated, by the DBA, to the appropriate place (vague enough?).

Okay, let's try and clear this up...

Dump Device

The DBA may decide to create a device for dumping the database nightly. The DBA
needs to create a dump device.

We'll call it datadump_for_my_db logically within the database, but we'll map
it to the physical world as /lv1/dumps/in_your_eye.dat. So the DBA will write a
script that connects to the SQL Server and issues a command like this:
dump database my_stinking_db to datadump_for_my_db
go

and the backupserver (out of this scope) takes the contents of my_stinking_db
and writes it out to the disk file /lv1/dumps/in_your_eye.dat

That's a dump device. The thing is that it's not preallocated. This special
device is simply a window to the operating system.

Data and Log Devices

Ah, now we are getting into the world of pre-allocation. Databases are built
over raw partitions. The reason for this is because Sybase needs to be
guaranteed that all its writes complete successfully. Otherwise, if it posted
to a file system buffer (as in a cooked file system) and the machine crashed,
as far as Sybase is concerned the write was committed. It was not, however, and
integrity of the database was lost. That is why Sybase needs raw partitions.
But back to the matter at hand...

When building a new SQL Server, the DBA determines how much space they'll need
for all the databases that will be housed in this SQL Server.

Each production database is composed of data and log.

The data is where the actual information resides. The log is where the changes
are kept. That is, every row that is updated/deleted/inserted gets placed into
the log portion then applied to the data portion of the database.


That's why the DBA strives to place the raw devices for logs on separate disks
because everything has to single thread through the log.

A transaction is a collection of SQL statements (insert/delete/update) that are
grouped together to form a single unit of work. Typically they map very closely
to the business.

I'll quote the Sybase SQL Server System Administration guide on the role of the
log:


The transaction log is a write-ahead log. When a user issues a statement
that would modify the database, SQL Server automatically writes the changes
to the log. After all changes for a statement have been recorded in the
log, they are written to an in-cache copy of the data page. The data page
remains in cache until the memory is needed for another database page. At
that time, it is written to disk. If any statement in a transaction fails
to complete, SQL Server reverses all changes made by the transaction. SQL
Server writes an "end transaction" record to the log at the end of each
transaction, recording the status (success or failure) of the transaction

As such, the log will grow as user connections affect changes to the database.
The need arises to then clear out the log of all transactions that have been
flushed to disk. This is performed by issuing the following command:
dump transaction my_stinking_db to logdump_for_my_db
go

The SQL Server will write to the dumpdevice all transactions that have been
committed to disk and will delete the entries from its copy, thus freeing up
space in the log. Dumping of the transaction logs is accomplished via cron (the
Unix scheduler; NT users would have to resort to at or some third party tool).
We schedule the heavily hit databases every 20 minutes during peak times.


A single user can fill up the log by having begin transaction with no
corresponding commit/rollback transaction. This is because all their
changes are being applied to the log as an open-ended transaction, which is
never closed. This open-ended transaction cannot be flushed from the log,
and therefore grows until it occupies all of the free space on the log
device.

And the way we dump it is with a dump device. :-)
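
As an aside, from System 11 onwards you can spot such an open-ended transaction
with a query against master..syslogshold (a sketch; the database name is
illustrative):

select spid, starttime, name
from   master..syslogshold
where  dbid = db_id("my_stinking_db")
go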

An Example

If the DBA has four databases to plop on this SQL Server and they need a total
of 800MB of data and 100MB of log (because that's what really matters to us),
then they'd probably do something like this:

1. allocate sufficient raw devices to cover the data portion of all the
databases
2. allocate sufficient raw devices to cover the log portion of all the
databases
3. start allocating the databases to the devices.

For example, assuming the following database requirements:
Database Requirements
+----+------+-----+
| DB | Data | Log |
+----+------+-----+
| a  | 300  | 30  |
+----+------+-----+
| b  | 400  | 40  |
+----+------+-----+
| c  | 100  | 10  |
+----+------+-----+

and the following devices:
Devices
+---------------+--------------------+------+
| Logical       | Physical           | Size |
+---------------+--------------------+------+
| dks3d1s2_data | /dev/rdsk/dks3d1s2 | 500  |
+---------------+--------------------+------+
| dks4d1s2_data | /dev/rdsk/dks4d1s2 | 500  |
+---------------+--------------------+------+
| dks5d1s0_log  | /dev/rdsk/dks5d1s0 | 200  |
+---------------+--------------------+------+

then the DBA may elect to create the databases as follows:


create database a on dks3d1s2_data = 300 log on dks5d1s0_log = 30
create database b on dks4d1s2_data = 400 log on dks5d1s0_log = 40
create database c on dks3d1s2_data = 50, dks4d1s2_data = 50 log on
dks5d1s0_log = 10

Some of the devices will have extra space available because our database
allocations didn't use up all the space. That's fine because it can be used for
future growth. While the Sybase SQL Server is running, no other Sybase SQL
Server can re-allocate these physical devices.

TempDB

TempDB is simply a scratch pad database. It gets recreated when a SQL Server is
rebooted. The information held in this database is temporary data. A query may
build a temporary table to assist it; the Sybase optimizer may decide to create
a temporary table to assist itself.

Since this is an area of constant activity we create this database over a
cooked file system which has historically proven to have better performance
than raw - due to the buffered writes provided by the Operating System.

Port Numbers

When creating a new SQL Server, we allocate a port to it (currently, DBA
reserves ports 1500 through 1899 for its use). We then map a host name to the
different ports: hera, fddi-hera and so forth. We can actually have more than
one port number for a SQL Server but we typically don't do this.

Back to top
-------------------------------------------------------------------------------

1.1.2: How to start/stop SQL Server when CPU reboots

-------------------------------------------------------------------------------

Below is an example of the various files (on Irix) that are needed to start/
stop a SQL Server. The information can easily be extended to any UNIX platform.

The idea is to allow as much flexibility to the two classes of administrators
who manage the machine:

*The System Administrator
*The Database Administrator

Any errors introduced by the DBA will not interfere with the System
Administrator's job.

With that in mind we have the system startup/shutdown file /etc/init.d/sybase
invoking a script defined by the DBA: /usr/sybase/sys.config/
{start,stop}.sybase

/etc/init.d/sybase

On some operating systems this file must be linked to a corresponding entry in
/etc/rc0.d and /etc/rc2.d -- see rc0(1M) and rc2(1M)
#!/bin/sh
# last modified: 10/17/95, sr.
#
# Make symbolic links so this file will be called during system stop/start.
# ln -s /etc/init.d/sybase /etc/rc0.d/K19sybase
# ln -s /etc/init.d/sybase /etc/rc2.d/S99sybase
# chkconfig -f sybase on

# Sybase System-wide configuration files
CONFIG=/usr/sybase/sys.config

if $IS_ON verbose ; then        # For a verbose startup and shutdown
    ECHO=echo
    VERBOSE=-v
else                            # For a quiet startup and shutdown
    ECHO=:
    VERBOSE=
fi

case "$1" in
'start')
    if $IS_ON sybase; then
        if [ -x $CONFIG/start.sybase ]; then
            $ECHO "starting Sybase servers"
            /bin/su - sybase -c "$CONFIG/start.sybase $VERBOSE &"
        else
            <error condition>
        fi
    fi
    ;;

'stop')
    if $IS_ON sybase; then
        if [ -x $CONFIG/stop.sybase ]; then
            $ECHO "stopping Sybase servers"
            /bin/su - sybase -c "$CONFIG/stop.sybase $VERBOSE &"
        else
            <error condition>
        fi
    fi
    ;;

*)
    echo "usage: $0 {start|stop}"
    ;;
esac

/usr/sybase/sys.config/{start,stop}.sybase

start.sybase

#!/bin/sh -a
#
# Script to start sybase
#
# NOTE: different versions of sybase exist under /usr/sybase/{version}
#
# Determine if we need to spew our output
if [ "$1" != "spew" ] ; then
OUTPUT=">/dev/null 2>&1"
else
OUTPUT=""
fi
# 10.0.2 servers
HOME=/usr/sybase/10.0.2
cd $HOME
# Start the backup server
eval install/startserver -f install/RUN_BU_KEPLER_1002_52_01 $OUTPUT
# Start the dataservers
# Wait two seconds between starts to minimize trauma to CPU server
eval install/startserver -f install/RUN_FAC_WWOPR $OUTPUT
sleep 2
eval install/startserver -f install/RUN_MAG_LOAD $OUTPUT
exit 0

stop.sybase

#!/bin/sh
#
# Script to stop sybase
#
# Determine if we need to spew our output
if [ -z "$1" ] ; then
OUTPUT=">/dev/null 2>&1"
else
OUTPUT="-v"
fi
eval killall -15 $OUTPUT dataserver backupserver sybmultbuf
sleep 2
# if they didn't die, kill 'em now...
eval killall -9 $OUTPUT dataserver backupserver sybmultbuf
exit 0

If your platform doesn't support killall, it can easily be simulated as
follows:
#!/bin/sh
#
# Simple killall simulation...
# $1 = signal
# $2 = process_name
#
#
# no error checking but assume first parameter is signal...
# what ya want for free? :-)
#
# (the PID is in the second column of "ps -ef" output)
kill -$1 `ps -ef | fgrep $2 | fgrep -v fgrep | awk '{ print $2 }'`

Back to top
-------------------------------------------------------------------------------

1.1.3: How do I move tempdb off of the Master Device?

-------------------------------------------------------------------------------


Note: I received a message from Sybase TS recommending that the FAQ no
longer advocate the physical removal of entries from the sysusages/
sysdatabases tables. It makes recovery extremely painful.

After reviewing their write-up I agree.

A quick alternative - Sybase TS Preferred Method

This is the Sybase TS method of removing most activity from the master device:

1. Alter tempdb on another device:
1> alter database tempdb on ...
2> go

2. Use the tempdb:
1> use tempdb
2> go

3. Drop the segments:
1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment "logsegment", tempdb, master
2> go
1> sp_dropsegment "system", tempdb, master
2> go


Note that there is still some activity on the master device. On a three
connection test that I ran:
while ( 1 = 1 )
begin
create table #x (col_a int)
drop table #x
end

there was one write per second. Not bad.

Yet another alternative

The idea of this handy script is to simply fill the first 2MB of tempdb thus
effectively blocking anyone else from using it. The slight gotcha with this
script, since we're using model, is that all subsequent database creates will
also have tempdb_filler installed. This is easily remedied by dropping the
table after creating a new database.

This script works because tempdb is rebuilt every time the SQL Server is
rebooted. Very nice trick!
/* this isql script creates a table in the model database. */
/* Since tempdb is created from the model database when the */
/* server is started, this effectively moves the active */
/* portion of tempdb off of the master device. */

use model
go

/* note: 2k row size */
create table tempdb_filler(
a char(255) not null,
b char(255) not null,
c char(255) not null,
d char(255) not null,
e char(255) not null
)
go

/* insert 1024 rows */
declare @i int
select @i = 1
while (@i <= 1024)
begin
insert into tempdb_filler values('a','b','c','d','e')
if (@i % 100 = 0) /* dump the transaction every 100 rows */
dump tran model with truncate_only
select @i=@i+1
end
go

Back to top
-------------------------------------------------------------------------------

1.1.4: How do I correct timeslice -201

-------------------------------------------------------------------------------

(Note, this procedure is only really necessary with pre-11.x systems. In
system 11 systems, these parameters are tunable using sp_configure.)
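
(I believe the System 11 equivalent of ctimemax is the "cpu grace time"
parameter; the value below is purely illustrative, so check the manuals for
your release.)

sp_configure "cpu grace time", 2000
go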

Why Increase It?

Basically, it will allow a task to be scheduled onto the CPU for a longer time.
Each task on the system is scheduled onto the CPU for a fixed period of time,
called the timeslice, during which it does some work, which is resumed when its
next turn comes around.

The process has up until the value of ctimemax (a config block variable) to
finish its task. As the task is working away, the scheduler counts down
ctimemax units. When it gets to the value of ctimemax - 1, if it gets stuck and
for some reason cannot be taken off the CPU, then a timeslice error gets
generated and the process gets infected.

On the other hand, SQL Server will allow a Server process to run as long as it
needs to. It will not swap the process out for another process to run. The
process will decide when it is "done" with the Server CPU. If, however, a
process goes on and on and never relinquishes the Server CPU, then Server will
timeslice the process.

Potential Fix

1. Shutdown the SQL Server
2. %buildmaster -dyour_device -yctimemax=2000
3. Restart your SQL Server. If the problem persists contact Sybase Technical
Support notifying them what you have done already.

Back to top
-------------------------------------------------------------------------------

1.1.5: Certified Sybase Professional

-------------------------------------------------------------------------------

There have been changes in the process of becoming a Sybase Certified
Professional. There's a very informative link at The Sybase Learning
Connection.

Rob Verschoor has put together some good stuff on his pages (http://
www.euronet.nl/~syp_rob/certtips.html) that have pretty much all that you need
to know.

Sybase have released some sample questions (look for them at http://
slc.sybase.com) but the GUI that they run under is a little flaky. The topmost
option on some of the questions seems to be unavailable. If you need
check-mark 1 in order to answer the question, click in the area where you
expect the checkbox to be and it magically appears.
-------------------------------------------------------------------------------

1.1.6: RAID and Sybase

-------------------------------------------------------------------------------

Here's a short summary of what you need to know about Sybase and RAID.

The newsgroup comp.arch.storage has a detailed FAQ on RAID, but here are a few
definitions:

RAID

RAID means several things at once. It provides increased performance through
disk striping, and/or resistance to hardware failure through either mirroring
(fast) or parity (slower but cheaper).

RAID 0

RAID 0 is just striping. It allows you to read and write quickly, but provides
no protection against failure.

RAID 1

RAID 1 is just mirroring. It protects you against failure, and generally reads
and writes as fast as a normal disk. It uses twice as many disks as normal (and
sends twice as much data across your SCSI bus, but most machines have plenty of
extra capacity on their SCSI busses.)


Sybase mirroring always reads from the primary copy, so it does not
increase read performance.

RAID 0+1

RAID 0+1 (also called RAID 10) is striping and mirroring together. This gives
you the highest read and write performance of any of the raid options, but uses
twice as many disks as normal.

RAID 4/RAID 5

RAID 4 and 5 have disk striping and use 1 extra disk to provide parity. Various
vendors have various optimizations, but this RAID level is generally much
slower at writes than any other kind of RAID.

RAID 7

I am not sure if this is a genuine RAID standard; further checking on your part
is required.

Details

Most hardware RAID controllers also provide a battery-backed RAM cache for
writing. This is very useful, because it allows the disk to claim that the
write succeeded before it has done anything. If there is a power failure, the
information will (hopefully) be written to disk when the power is restored. The
cache is very important because database log writes cause the process doing the
writes to stop until the write is successful. Systems with write caching thus
complete transactions much more quickly than systems without.

What RAID levels should my data, log, etc be on? Well, the log disk is
frequently written, so it should not be on RAID 4 or 5. If your data is
infrequently written, you could use RAID 4 or 5 for it, because you don't mind
that writes are slow. If your data is frequently written, you should use RAID
0+1 for it. Striping your data is a very effective way of avoiding any one disk
becoming a hot-spot. Traditionally Sybase databases were divided among devices
by a human attempting to determine where the hot-spots are. Striping does this
in a straight-forward fashion, and also continues to work if your data access
patterns change.

Your tempdb is data but it is frequently written, so it should not be on RAID 4
or 5.

If your RAID controller does not allow you to create several different kinds of
RAID volumes on it, then your only hope is to create a huge RAID 0+1 set. If
your RAID controller does not support RAID 0+1, you shouldn't be using it for
database work.

Back to top
-------------------------------------------------------------------------------

1.1.7: How to swap a db device with another

-------------------------------------------------------------------------------

Here are four approaches. Before attempting any of the following: Backup,
Backup, Backup.

Dump and Restore

1. Backup the databases on the device, drop the databases, drop the devices.
2. Rebuild the new devices.
3. Rebuild the databases (Make sure you recreate the fragments correctly - See
Ed Barlow's scripts (http://www.tiac.net/users/sqltech/) for an sp that
helps you do this if you've lost your notes. Failure to do this will
possibly lead to data on log segments and log on data segments).
4. Reload the database dumps!

Twiddle the Data Dictionary - for brave experts only.

1. Shut down the server.
2. Do a physical dump (using dd(1), or such utility) of the device to be moved.
3. Load the dump to the new device
4. Edit the data dictionary (sysdevices.phyname) to point to the new device.
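
A sketch of step 4 (device names are illustrative; remember to switch allow
updates back off afterwards):

sp_configure "allow updates", 1
go
reconfigure with override   /* System 10 and below */
go

update master..sysdevices
set    phyname = "/dev/rdsk/new_device"
where  name    = "old_logical_name"
go

sp_configure "allow updates", 0
go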

The Mirror Trick

1. Create a mirror of the old device, on the new device.
2. Unmirror the primary device, thereby making the _backup_ the primary device.
3. Repeat this for all devices until the old disk is free.
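
In SQL, the mirror trick looks roughly like this (device names are
illustrative):

disk mirror
    name   = "old_device",
    mirror = "/dev/rdsk/new_physical_device"
go

/* once the mirror is established, drop the original side and keep the copy */
disk unmirror
    name = "old_device",
    side = "primary",
    mode = remove
go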

dd (Unix only)

(This option is of no use if you need to move a device now; rather, it is for
when you anticipate moving a device at some point in the future.)

You may want to use this approach for creating any database.

Create (or use) a directory for symbolic links to the devices you wish to use.
Then create your database, but instead of going to /dev/device, go to /
directory/symlink - When it comes time to move your devices, you shut down the
server, simply dd(1) the data from the old device to the new device, recreate
the symbolic links to the new device and restart the SQL Server. Simple as
that.

Backups are a requisite in all cases, just in case.

Back to top
-------------------------------------------------------------------------------

1.1.8: Server naming and renaming

-------------------------------------------------------------------------------

There are three totally separate places where SQL Server names reside, causing
much confusion.

SQL Server Host Machine interfaces File

A master entry in here for server TEST will provide the network information
that the server is expected to listen on. The -S parameter to the dataserver
executable tells the server which entry to look for, so in the RUN_TEST file,
-STEST will tell the dataserver to look for the entry under TEST in the
interfaces file and listen on any network parameters specified by 'master'
entries.
TEST
master tcp ether hpsrv1 1200
query tcp ether hpsrv1 1200


Note that preceding the master/query entries there's a tab.

This is as far as the name TEST is used. Without further configuration the
server does not know its name is TEST, nor do any client applications.
Typically there will also be query entries under TEST in the local interfaces
file, and client programs running on the same machine as the server will pick
this connection information up. However, there is nothing to stop the query
entry being duplicated under another name entirely in the same interfaces file.
ARTHUR
query tcp ether hpsrv1 1200

isql -STEST or isql -SARTHUR will connect to the same server. The name is
simply a search parameter into the interfaces file.

Client Machine interfaces File

Again, as the server name specified to the client is simply a search parameter
for Open Client into the interfaces file, SQL.INI or WIN.INI the name is
largely irrelevant. It is often set to something that means something to the
users, especially where they might have a choice of servers to connect to. Also
multiple query entries can be set to point to the same server, possibly using
different network protocols. eg. if TEST has the following master entries on
the host machine:
TEST
master tli spx /dev/nspx/ \xC12082580000000000012110
master tcp ether hpsrv1 1200

Then the client can have a meaningful name:
ACCOUNTS_TEST_SERVER
query tcp ether hpsrv1 1200

or alternative protocols:
TEST_IP
query tcp ether hpsrv1 1200
TEST_SPX
query tli spx /dev/nspx/ \xC12082580000000000012110

sysservers

This system table holds information about remote SQL Servers that you might
want to connect to, and also provides a method of naming the local server.

Entries are added using the sp_addserver system procedure - add a remote server
with this format:
sp_addserver server_name, null, network_name

server_name is any name you wish to refer to a remote server by, but
network_name must be the name of the remote server as referenced in the
interfaces file local to your local server. It normally makes sense to make the
server_name the same as the network_name, but you can easily do:
sp_addserver LIVE, null, ACCTS_LIVE

When you execute, for example, exec LIVE.master..sp_helpdb, the local SQL
Server will translate LIVE to ACCTS_LIVE and try to talk to ACCTS_LIVE via the
ACCTS_LIVE entry in the local interfaces file.

Finally, a variation on the sp_addserver command:
sp_addserver LOCALSRVNAME, local

names the local server (after a restart). This is the name the server reports
in the errorlog at startup, the value returned by @@SERVERNAME, and the value
placed in Open Client server messages. It can be completely different from the
names in RUN_SRVNAME or in local or remote interfaces - it has no bearing on
connectivity matters.
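As a quick illustration (the name PRODSRV below is just an example), renaming
the local server and checking the result from isql looks like this:

1> sp_addserver PRODSRV, local
2> go
1> shutdown
2> go

Restart the server from its runserver file, then:

1> select @@servername
2> go

 ------------------------------
 PRODSRV

Until the restart the server keeps reporting its old name; only the restarted
server picks up the new value.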

Back to top
-------------------------------------------------------------------------------

1.1.9: How do I interpret the tli strings in the interface file?

-------------------------------------------------------------------------------

The tli string contained within Solaris interfaces files is a hex string
containing the port number and IP address. If you have an entry

SYBSRVR
master tli tcp /dev/tcp \x000204018196c4510000000000000000

Then it can be interpreted as follows:


x0002 no user interpretation (header info?)
0401 port number (1025 decimal)
81 first part of IP address (129 decimal)
96 second part of IP address (150 decimal)
c4 third part of IP address (196 decimal)
51 fourth part of IP address (81 decimal)

So, the above tli address is equivalent to

SYBSRVR
master tcp ether sybhost 1025

where sybhost's IP address is 129.150.196.81.

The following piece of Sybperl (courtesy of Michael Peppler) takes a tli entry
and returns the IP address and port number for each server in a Solaris
interfaces file.

#!/usr/local/bin/perl -w

use strict;

my $server;
my @dat;
my ($port, $ip);

while(<>) {
    next if /^\s*$/;
    next if /^\s*\#/;
    chomp;
    if(/^\w/) {
        $server = $_;
        $server =~ s/\s*$//;
        next;
    }

    @dat = split(' ', $_);
    ($port, $ip) = parseAddress($dat[4]);
    print "$server - $dat[0] on port $port, host $ip\n";
}

sub parseAddress {
    my $addr = shift;

    my $port;
    my $ip;

    my (@arr) = (hex(substr($addr, 10, 2)),
                 hex(substr($addr, 12, 2)),
                 hex(substr($addr, 14, 2)),
                 hex(substr($addr, 16, 2)));
    $port = hex(substr($addr, 6, 4));
    $ip   = join('.', @arr);

    ($port, $ip);
}

Back to top
-------------------------------------------------------------------------------

1.1.10: How can I tell the datetime my Server started?

-------------------------------------------------------------------------------

Method #1

The normal way would be to look at the errorlog, but this is not always
convenient or even possible. From a SQL session you can find out the server startup
time to within a few seconds using:
select "Server Start Time" = crdate
from master..sysdatabases
where name = "tempdb"

Method #2

Another useful query is:
select * from sysengines

which gives the address and port number at which the server is listening.

Back to top
-------------------------------------------------------------------------------

1.1.11: Raw partitions or regular files?

-------------------------------------------------------------------------------

Hmmm... as always, the answer depends on the vendor's implementation of cooked
file system I/O for the SQL Server...

Performance Hit (synchronous vs asynchronous)

If, on a given platform, the SQL Server performs file system I/O synchronously,
then the SQL Server is blocked on the read/write and throughput is decreased
tremendously.

The way the SQL Server typically works is that it will issue an I/O (read/
write) and save the I/O control block and continue to do other work (on behalf
of other connections). It'll periodically poll the workq's (network, I/O) and
resume connections when their work has completed (I/O completed, network data
xmit'd...).

Performance Hit (bcopy issue)

Assuming that the file system I/O is asynchronous (this can be done on SGI), a
performance hit may be realized when bcopy'ing the data from kernel space to
user space.

Cooked I/O typically (again, SGI has something called directed I/O which allows
I/O to go directly to user space) has to go from disk to kernel buffers and
from kernel buffers to user space on a read. The extra layer with the kernel
buffers is inherently slow. The data is moved between kernel buffers and user
space using bcopy(). On small operations this typically isn't that much of an
issue, but in an RDBMS scenario the bcopy() layer is a significant performance
hit because it's done so often...

Performance Gain!

It's true, using file systems, at times you can get performance gains assuming
that the ASE/SQL Server on your platform does the I/O asynchronously (although
there's a caveat on this too... I'll cover that later on).

If your machine has sufficient memory and extra CPU capacity, you can realize
some gains by having writes return immediately because they're posted to
memory. Reads will gain from the anticipatory fetch algorithm employed by most
O/S's.

You'll need extra memory to house the kernel buffered data and you'll need
extra CPU capacity to allow bdflush() to write the dirty data out to disk...
eventually... but with everything there's a cost: extra memory and free CPU
cycles.

One argument is that, instead of giving the O/S the extra memory (by leaving it
free), you should give it to the SQL Server and let it do its own caching...
but that's a different thread...

Data Integrity and Cooked File System

If the Sybase SQL Server is not certified to be used over a cooked file system
then, because of the nature of the kernel buffering (see the section above),
you may face database corruption by using a cooked file system anyway. The SQL
Server thinks that it has posted its changes out to disk but in reality the
data has gone only to memory. If the machine halts without bdflush() having a
chance to flush memory out to disk, your database may become corrupted.

Some O/S's allow cooked files to have a write-through mode, and it really
depends on whether the SQL Server has been certified on cooked file systems. If
it has, it means that when the ASE/SQL Server opens a device which is on a file
system, it fcntl()'s the device to write-through.

When to use cooked file system?

I typically build my tempdb on cooked file system and I don't worry about data
integrity because tempdb is rebuilt every time your ASE/SQL Server is rebooted.
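For example, putting tempdb on a cooked file might look something like the
following sketch (the device name, path, vdevno and sizes are only examples;
size is given in 2K pages on a 2K page server, so 51200 pages is 100Mb):

1> disk init name = "tempdb_01",
2>     physname = "/sybase/devices/tempdb_01.dat",
3>     vdevno = 5, size = 51200
4> go
1> alter database tempdb on tempdb_01 = 100
2> go

As discussed above, only do this for a database such as tempdb whose contents
you can afford to lose, unless ASE/SQL Server is certified on cooked file
systems for your platform.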

Back to top
-------------------------------------------------------------------------------

1.1.12: Is Sybase Y2K (Y2000) compliant?

-------------------------------------------------------------------------------

Sybase is year 2000 compliant at specific revisions of each product. Full
details are available at http://www.sybase.com, specifically (as these links
will undoubtedly change):


http://www.sybase.com/success/inc/corpinfo/year2000_int.html
http://www.sybase.com/Company/corpinfo/year2000_matrix.html

Note: Since we have made it to 2000 more or less intact, I see no reason to
include this question. I plan to remove it with the next release of the FAQ. If
you feel strongly about leaving it in then let me know.

Back to top
-------------------------------------------------------------------------------

1.1.13 How Can I Run the SQL Server Upgrade Manually?

-------------------------------------------------------------------------------

How to Run the SQL Server Upgrade Manually

This document describes the steps required to perform a manual upgrade for SQL
Server from release 4.x or 10.0x to release 11.02. In most cases, however, you
should use sybinit to perform the upgrade.

BE SURE TO HAVE GOOD BACKUPS BEFORE STARTING THIS PROCEDURE.

1. Use release 11.0x sybinit to run the pre-eligibility test and Check Reserved
words. Make any necessary changes that are mentioned in the sybinit log.
The sybinit log is located in $SYBASE/init/logs/logxxxx.yyy.
2. Use isql to connect to the 4.x or 10.0x SQL Server and do the following
tasks:
a. Turn on option to allow updates to system tables:
1> sp_configure "allow updates", 1
2> go

b. Checkpoint all databases:
1> use "dbname"
2> go
1> checkpoint
2> go

c. Shutdown the 4.x or 10.0x SQL Server.
1> shutdown
2> go

3. Copy the interfaces file to the release 11.0x directory.
4. Set the environment variable SYBASE to the release 11.0x directory.
5. Copy the runserver file to the release 11.0x $SYBASE/install directory.
6. Edit the $SYBASE/install/RUN_SYBASE (runserver file) to change the path from
the 4.x or 10.x dataserver directory to the new release 11.0x directory.
7. Start SQL Server using the new runserver file.
% startserver -f$SYBASE/install/RUN_SYBASE

8. Run the upgrade program:

UNIX: $SYBASE/upgrade/upgrade -S"servername" -P"sapassword" \
          > $SYBASE/init/logs/mylog.log 2>&1
VMS:  SYBASE_SYSTEM[SYBASE.UPGRADE]upgrade /password="sa_password" /servername="servername"

9. Shut down SQL server after a successful upgrade.
% isql -Usa -Pxxx
-SSYBASE
1> shutdown
2> go

10. Start SQL Server using the release 11.0x runserver file.

% startserver -f$SYBASE/install/RUN_SYBASE

11. Create the sybsystemprocs device and database if upgrading from 4.9.x. You
    should create a 21mb sybsystemprocs device and database.
    a. Use the disk init command to create the sybsystemprocs device and
       database manually, for example:

       disk init name = "sybprocsdev",
           physname = "/dev/sybase/rel1102/sybsystemprocs.dat",
           vdevno = 4, size = 10752
       go

       To check which vdevno numbers are already in use (so that you can pick
       a free one), type:

       1> select distinct low/16777216 from sysdevices
       2> order by low
       3> go

       A sample create database command:

       create database sybsystemprocs on sybprocsdev = 21
       go

       Please refer to the "Sybase SQL Server Reference Manual" for more
       information on these commands.

12. Run the installmaster and installmodel scripts:
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmaster
UNIX: %isql -Usa -Psapassword -i$SYBASE/scripts/installmodel
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmaster"
VMS: $isql /user="sa" /password="sapass"
/input="[sybase_system.scripts]installmodel"

13. If you upgraded from SQL Server 4.9.2, you will need to run sp_remap to
remap the compiled objects. Sp_remap remaps stored procedures, triggers,
rules, defaults, or views to be compatible with the current release of SQL
Server. Please refer to the Reference Manual Volume II for more information
on the sp_remap command.

The syntax for sp_remap:
sp_remap object_name

If you are upgrading to SQL Server 11.0.x and the upgrade process failed when
using sybinit, you can invoke sybinit and choose remap query trees from the
upgrade menu screen. This is a new option that is added to the menu after a
failed upgrade.

Back to top
-------------------------------------------------------------------------------

next # ASE FAQ



General Troubleshooting


1. How do I turn off marked suspect on my database?
2. On startup, the transaction log of a database has filled and recovery has
   suspended, what can I do?

next prev ASE FAQ
-------------------------------------------------------------------------------

1.4.1 How do I turn off marked suspect on my database?

-------------------------------------------------------------------------------

Say one of your databases is marked suspect as the SQL Server is coming up. Here
are the steps to take to unset the flag.


Remember to fix the problem that caused the database to be marked suspect
after switching the flag.

System 11

1. sp_configure "allow updates", 1
2. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
3. begin transaction
4. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
5. commit transaction
6. shutdown
7. startserver -f RUN_*
8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure
14. shutdown
15. startserver -f RUN_*

System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
4. begin transaction
5. update sysdatabases set status = -32768 where dbid = db_id("my_hosed_db")
6. commit transaction
7. shutdown
8. startserver -f RUN_*
9. fix the problem that caused the database to be marked suspect
10. begin transaction
11. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
12. commit transaction
13. sp_configure "allow updates", 0
14. reconfigure
15. shutdown
16. startserver -f RUN_*

Pre System 10

1. sp_configure "allow updates", 1
2. reconfigure with override
3. select status - 320 from sysdatabases where dbid = db_id("my_hosed_db") --
save this value.
4. begin transaction
5. update sysdatabases set status = -32767 where dbid = db_id("my_hosed_db")
6. commit transaction
7. You should now be able to access the database so that it can be cleared out. If not:
1. shutdown
2. startserver -f RUN_*

8. fix the problem that caused the database to be marked suspect
9. begin transaction
10. update sysdatabases set status = saved_value where dbid = db_id
("my_hosed_db")
11. commit transaction
12. sp_configure "allow updates", 0
13. reconfigure

Return to top
-------------------------------------------------------------------------------

1.4.2 On startup, the transaction log of a database has filled and recovery has
suspended, what can I do?

-------------------------------------------------------------------------------

You might find the following in the error log:

00:00000:00001:2000/01/04 07:43:42.68 server Can't allocate space for object
'syslogs' in database 'DBbad' because 'logsegment' segment is full/has no free
extents. If you ran out of space in syslogs, dump the transaction log.
Otherwise, use ALTER DATABASE or sp_extendsegment to increase size of the
segment.
00:00000:00001:2000/01/04 07:43:42.68 server Error: 3475, Severity: 21, State:
7
00:00000:00001:2000/01/04 07:43:42.68 server There is no space available in
SYSLOGS for process 1 to log a record for which space has been reserved. This
process will retry at intervals of one minute. The internal error number is -4.

which can prevent ASE from starting properly. Here is a neat solution from Sean
Kiely (sean....@sybase.com) of Sybase Technical Support, which works if the
database has any "data only" segments. Obviously this method does not apply to
the master database; the Sybase Troubleshooting Guide has very good coverage of
recovering the master database.

1. You will have to bring the server up with trace flag 3608 to prevent the
recovery of the user databases.
2. sp_configure "allow updates",1
go
3. Write down the segmap entries from the sysusages table for the toasted
database.
4. update sysusages
set segmap = 7
where dbid = db_id("my_toasted_db")
and segmap = 3
5. select status - 320
from sysdatabases
where dbid = db_id("my_toasted_db") -- save this value.
go
begin transaction
update sysdatabases set status = -32768 where dbid = db_id("my_toasted_db")
go -- if all is OK, then...
commit transaction
go
shutdown
go
6. Restart the server without the trace flag. With luck it should now have
enough space to recover. If it doesn't, you are in deeper trouble than
before; you do have a good, recent backup, don't you?
7. dump database my_toasted_db with truncate_only
go
8. Reset the segmap entries in sysusages to be those as saved in 3. above.
9. Shut down ASE and restart. (The trace flag should have gone at step 6, but
ensure that it is not there!)
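As an aside to steps 3 and 8, the segmap entries can be listed and later put
back with queries along the following lines (a sketch only; the lstart and
segmap values shown are placeholders for the ones you actually wrote down):

1> select dbid, lstart, segmap
2> from sysusages
3> where dbid = db_id("my_toasted_db")
4> go

and, once the log has been dumped in step 7:

1> update sysusages
2> set segmap = 3        -- the value you noted for this fragment
3> where dbid = db_id("my_toasted_db")
4> and lstart = 0        -- that fragment's lstart
5> go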



SQL Advanced



6.2.1 How to emulate the Oracle decode function/crosstab
6.2.2 How to implement if-then-else within a select-clause.
6.2.3 deleted due to copyright hassles with the publisher
6.2.4 How to pad with leading zeros an int or smallint.
6.2.5 Divide by zero and nulls.
6.2.6 Convert months to financial months.
6.2.7 Hierarchy traversal - BOMs.
6.2.8 Is it possible to call a UNIX command from within a stored
procedure or a trigger?
6.2.9 Information on Identities and Rolling your own Sequential Keys
6.2.10 How can I execute dynamic SQL with ASE/SQL Server?

next prev ASE FAQ
-------------------------------------------------------------------------------

6.2.1: How to emulate the Oracle decode function/crosstab

-------------------------------------------------------------------------------

There is a neat way to use boolean logic to perform cross-tab or rotation
queries easily and very efficiently. Using the aggregate 'Group By' clause in
a query and the ISNULL(), SIGN(), ABS(), SUBSTRING() and CHARINDEX() functions,
you can create queries and views to perform all kinds of summarizations.


This technique does not produce easily understood SQL statements.

If you want to test a field to see if it is equal to a value, say 100, use the
following code:
SELECT (1- ABS( SIGN( ISNULL( 100 - <field>, 1))))

The innermost function will return 1 when the field is null, a positive value
if the field is < 100, a negative value if the field is > 100 and 0 if the
field = 100. This example is for Sybase or Microsoft SQL Server, but other
servers should support most of these functions or the COALESCE() function,
which is the ANSI equivalent of ISNULL().

The SIGN() function returns zero for a zero value, -1 for a negative value and
1 for a positive value. The ABS() function, applied to that result, then
returns zero for a zero value and 1 for any non-zero value.

Put it all together and you get '0' if the values match, and '1' if they don't.
This is not that useful, so we subtract this return value from '1' to invert
it, giving us a TRUE value of '1' and a false value of '0'. These return values
can then be multiplied by the value of another column, or used within the
parameters of another function like SUBSTRING() to return a conditional text
value.

For example, to create a grid from a student registration table containing
STUDENT_ID and COURSE_ID columns, where there are 5 courses (101, 105, 201,
210, 300) use the following query:

SELECT STUDENT_ID,
       SUM(1- ABS( SIGN( ISNULL( 101 - COURSE_ID, 1)))) COURSE_101,
       SUM(1- ABS( SIGN( ISNULL( 105 - COURSE_ID, 1)))) COURSE_105,
       SUM(1- ABS( SIGN( ISNULL( 201 - COURSE_ID, 1)))) COURSE_201,
       SUM(1- ABS( SIGN( ISNULL( 210 - COURSE_ID, 1)))) COURSE_210,
       SUM(1- ABS( SIGN( ISNULL( 300 - COURSE_ID, 1)))) COURSE_300
  FROM REGISTRATION   /* the student registration table described above */
 GROUP BY STUDENT_ID
 ORDER BY STUDENT_ID
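The same 0/1 flag can be fed into another function, such as SUBSTRING(), to
return a conditional piece of text, as mentioned above. A small sketch against
the same table (the table name REGISTRATION simply stands for the student
registration table):

SELECT STUDENT_ID, COURSE_ID,
       SUBSTRING('YN', 2 - (1- ABS( SIGN( ISNULL( 300 - COURSE_ID, 1)))), 1)
           LEVEL_300_FLAG
  FROM REGISTRATION

The flag is 1 for course 300 and 0 for anything else, so the SUBSTRING() start
position works out to 1 ('Y') or 2 ('N').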

Back to top
-------------------------------------------------------------------------------

6.2.2: How to implement if-then-else in a select clause

-------------------------------------------------------------------------------

If you need to implement the following condition in a select clause:
if @val = 'small' then
print 'petit'
else
print 'grand'
fi

do the following:
select isnull(substring('petit', charindex('small', @val), 255), 'grand')

To test it out, try the following T-SQL:

declare @val char(20)

select @val = 'grand'

select isnull(substring('petit', charindex('small', @val), 255), 'grand')

Back to top
-------------------------------------------------------------------------------

6.2.3: Removed

-------------------------------------------------------------------------------

6.2.4: How to pad with leading zeros an int or smallint.

-------------------------------------------------------------------------------

By example:
declare @Integer int

/* Good for positive numbers only. */
select @Integer = 1000

select "Positives Only" =
right( replicate("0", 12) + convert(varchar, @Integer), 12)

/* Good for positive and negative numbers. */
select @Integer = -1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

select @Integer = 1000

select "Both Signs" =
substring( "- +", (sign(@Integer) + 2), 1) +
right( replicate("0", 12) + convert(varchar, abs(@Integer)), 12)

go

Produces the following results:
Positives Only
--------------
000000001000

Both Signs
-------------
-000000001000

Both Signs
-------------
+000000001000

Back to top
-------------------------------------------------------------------------------

6.2.5: Divide by zero and nulls

-------------------------------------------------------------------------------

During processing, if a divide by zero error occurs, you will not get the
answer you want. If you want the result set to come back with null displayed
wherever a divide by zero occurs, do the following:
1> select * from total_temp
2> go
field1 field2
----------- -----------
10 10
10 0
10 NULL

(3 rows affected)
1> select field1, field1/(field2*convert(int,
substring('1',1,abs(sign(field2))))) from total_temp
2> go
field1
----------- -----------
10 1
10 NULL
10 NULL

Back to top
-------------------------------------------------------------------------------

6.2.6: Convert months to financial months

-------------------------------------------------------------------------------

To convert months to financial year months (i.e. July = 1, Dec = 6, Jan = 7,
June = 12 )

Method #1

select ... ((sign(sign((datepart(month,GetDate())-6) * -1)+1) *
(datepart(month, GetDate())+6))
+ (sign(sign(datepart(month, GetDate())-7)+1) *
(datepart(month, GetDate())-6)))
...
from ...

Method #2

select charindex(datename(month, getdate()),
       "          "                        /* ten leading spaces           */
     + "July      August    September "    /* each month in a 10-char slot */
     + "October   November  December  "
     + "January   February  March     "
     + "April     May       June") / 10

In the above example the embedded blanks are significant: each month name sits
in its own ten-character slot, preceded by ten spaces, so that integer division
of the charindex() result by 10 maps July to 1, August to 2, and so on through
June to 12.
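A third variant, not in the original posting but offered here as a sketch,
gets the same mapping with plain modulo arithmetic and avoids the long string
altogether:

select (datepart(month, getdate()) + 5) % 12 + 1

For July this gives (7 + 5) % 12 + 1 = 1, for December 6, for January 7 and
for June 12.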

Back to top
-------------------------------------------------------------------------------



Performance and Tuning




1.5.1 What are the nitty gritty details on Performance and Tuning?
1.5.2 What is best way to use temp tables in an OLTP environment?
1.5.3 What's the difference between clustered and non-clustered
indexes?
1.5.4 Optimistic versus Pessimistic locking?
1.5.5 How do I force an index to be used?
1.5.6 Why place tempdb and log on low numbered devices?
1.5.7 Have I configured enough memory for ASE/SQL Server?
1.5.8 Why should I use stored procedures?
1.5.9 I don't understand showplan's output, please explain.
1.5.10 Poor man's sp_sysmon.
1.5.11 View MRU-LRU procedure cache chain.
1.5.12 Improving Text/Image Type Performance

next prev ASE FAQ
-------------------------------------------------------------------------------

1.5.1: Sybase ASE/SQL Server Performance and Tuning

-------------------------------------------------------------------------------

Before going any further, Eric Miner (eric....@sybase.com) has made available
two presentations that he gave at Techwave 1999. The first covers the use of
optdiag. The second covers features in the way the optimiser works in ASE
11.9.2 and 12. These are Powerpoint slides converted to web pages, so they
might be tricky to read with a text based browser!

All Components Affect Response Time & Throughput

We often think that high performance is defined as a fast data server, but the
picture is not that simple. Performance is determined by all these factors:

*The client application itself:
+How efficiently is it written?
+We will return to this later, when we look at application tuning.

*The client-side library:
+What facilities does it make available to the application?
+How easy are they to use?

*The network:
+How efficiently is it used by the client/server connection?

*The DBMS:
+How effectively can it use the hardware?
+What facilities does it supply to help build efficient fast
applications?

*The size of the database:
+How long does it take to dump the database?
+How long to recreate it after a media failure?

Unlike some products which aim at performance on paper, Sybase aims at solving
the multi-dimensional problem of delivering high performance for real
applications.

OBJECTIVES

To gain an overview of important considerations and alternatives for the
design, development, and implementation of high performance systems in the
Sybase client/server environment. The issues we will address are:

*Client Application and API Issues
*Physical Database Design Issues
*Networking Issues
*Operating System Configuration Issues
*Hardware Configuration Issues
*SQL Server Configuration Issues


Client Application and Physical Database Design decisions will
account for over 80% of your system's "tuneable" performance so ... plan
your project resources accordingly !

It is highly recommended that every project include individuals who have taken
Sybase Education's Performance and Tuning course. This 5-day course provides
the hands-on experience essential for success.

Client Application Issues

*Tuning Transact-SQL Queries
*Locking and Concurrency
*ANSI Changes Affecting Concurrency
*Application Deadlocking
*Optimizing Cursors in v10
*Special Issues for Batch Applications
*Asynchronous Queries
*Generating Sequential Numbers
*Other Application Issues

Tuning Transact-SQL Queries

*Learn the Strengths and Weaknesses of the Optimizer
*One of the largest factors determining performance is TSQL! Test not only
for efficient plans but also semantic correctness.
*Optimizer will cost every permutation of accesses for queries involving 4
tables or less. Joins of more than 4 tables are "planned" 4-tables at a
time (as listed in the FROM clause) so not all permutations are evaluated.
You can influence the plans for these large joins by the order of tables in
the FROM clause.
*Avoid the following, if possible:
+What are SARGS?

This is short for search arguments. A search argument is essentially a
constant value such as:
o"My company name"
o3448

but not:
o344 + 88
olike "%what you want%"

+Mathematical Manipulation of SARGs

SELECT name FROM employee WHERE salary * 12 > 100000

+Use of Incompatible Datatypes Between Column and its SARG

Float & Int, Char & Varchar, Binary & Varbinary are Incompatible;

Int & Intn (allow nulls) OK

+Use of multiple "OR" Statements - especially on different columns in
same table. If any portion of the OR clause requires a table scan, it
will! OR Strategy requires additional cost of creating and sorting a
work table.
+Not using the leading portion of the index (unless the query is
completely covered)
+Substituting "OR" with "IN (value1, value2, ... valueN) Optimizer
automatically converts this to an "OR"
+Use of Non-Equal Expressions (!=) in WHERE Clause.

*Use Tools to Evaluate and Tune Important/Problem Queries
+Use the "set showplan on" command to see the plan chosen as "most
efficient" by optimizer. Run all queries through during development and
testing to ensure accurate access model and known performance.
Information comes through the Error Handler of a DB-Library
application.
+Use the "dbcc traceon(3604, 302, 310)" command to see each
alternative plan evaluated by the optimizer. Generally, this is only
necessary to understand why the optimizer won't give you the plan you
want or need (or think you need)!
+Use the "set statistics io on" command to see the number of logical
and physical i/o's for a query. Scrutinize those queries with high
logical i/o's.
+Use the "set statistics time on" command to see the amount of time
(elapsed, execution, parse and compile) a query takes to run.
+If the optimizer turns out to be a "pessimizer", use the "set
forceplan on" command to change join order to be the order of the
tables in the FROM clause.
+If the optimizer refuses to select the proper index for a table, you
can force it by adding the index id in parentheses after the table name
in the FROM clause.

SELECT * FROM orders(2), order_detail(1) WHERE ...

This may cause portability issues should index id's vary/change by
site !
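Going back to the "Mathematical Manipulation of SARGs" item above, the usual
fix is to rearrange the expression so that the column stands alone on one side
of the comparison. A sketch using the same salary example:

/* avoid: the column is buried inside an expression, so this is not a SARG */
SELECT name FROM employee WHERE salary * 12 > 100000

/* prefer: the column stands alone and an index on salary can be considered.
   The non-integer literal keeps the division from being truncated. */
SELECT name FROM employee WHERE salary > 100000.00 / 12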

Locking and Concurrency

*The Optimizer Decides on Lock Type and Granularity
*Decisions on lock type (share, exclusive, or update) and granularity
(page or table) are made during optimization so make sure your updates and
deletes don't scan the table !
*Exclusive Locks are Only Released Upon Commit or Rollback
*Lock Contention can have a large impact on both throughput and response
time if not considered both in the application and database design !
*Keep transactions as small and short as possible to minimize blocking.
Consider alternatives to "mass" updates and deletes such as a v10.0 cursor
in a stored procedure which frequently commits.
*Never include any "user interaction" in the middle of transactions.
*Shared Locks Generally Released After Page is Read
*Share locks "roll" through result set for concurrency. Only "HOLDLOCK" or
"Isolation Level 3" retain share locks until commit or rollback. Remember
also that HOLDLOCK is for read-consistency. It doesn't block other readers
!
*Use optimistic locking techniques such as timestamps and the tsequal()
function to check for updates to a row since it was read (rather than
holdlock)
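A sketch of the timestamp/tsequal() idea from the last bullet (the table,
column and key values are invented, and the table needs a column of type
timestamp):

declare @saved_ts varbinary(8)

/* read the row and remember its timestamp - no locks are held afterwards */
select @saved_ts = timestamp
from accounts
where acct_no = 1001

/* later: write it back only if nobody changed the row in the meantime */
update accounts
set balance = balance - 100
where acct_no = 1001
and tsequal(timestamp, @saved_ts)

If another process has modified the row, tsequal() rejects the update with an
error rather than silently overwriting the change, so the application can
re-read the row and retry.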

ANSI Changes Affecting Concurrency

*Chained Transactions Risk Concurrency if Behavior not Understood
*Sybase defaults each DML statement to its own transaction if not
specified ;
*ANSI automatically begins a transaction with any SELECT, FETCH, OPEN,
INSERT, UPDATE, or DELETE statement ;
*If Chained Transaction must be used, extreme care must be taken to ensure
locks aren't left held by applications unaware they are within a
transaction! This is especially crucial if running at Level 3 Isolation
*Lock at the Level of Isolation Required by the Query
*Read Consistency is NOT a requirement of every query.
*Choose level 3 only when the business model requires it
*Running at Level 1 but selectively applying HOLDLOCKs as needed is safest
*If you must run at Level 3, use the NOHOLDLOCK clause when you can !
*Beware of (and test) ANSI-compliant third-party applications for
concurrency

Application Deadlocking

Prior to SQL Server 10 cursors, many developers simulated cursors by using two
or more connections (dbproc's) and divided the processing between them. Often,
this meant one connection had a SELECT open while "positioned" UPDATEs and
DELETEs were issued on the other connection. The approach inevitably leads to
the following problem:

1. Connection A holds a share lock on page X (remember "Rows Pending" on SQL
Server leave a share lock on the "current" page).
2. Connection B requests an exclusive lock on the same page X and waits...
3. The APPLICATION waits for connection B to succeed before invoking whatever
logic will remove the share lock (perhaps dbnextrow). Of course, that never
happens ...

Since Connection A never requests a lock which Connection B holds, this is NOT
a true server-side deadlock. It's really an "application" deadlock !

Design Alternatives

1. Buffer additional rows in the client that are "nonupdateable". This forces
the shared lock onto a page on which the application will not request an
exclusive lock.
2. Re-code these modules with CT-Library cursors (aka. server-side cursors).
These cursors avoid this problem by disassociating command structures from
connection structures.
3. Re-code these modules with DB-Library cursors (aka. client-side cursors).
These cursors avoid this problem through buffering techniques and
re-issuing of SELECTs. Because of the re-issuing of SELECTs, these cursors
are not recommended for high transaction sites !

Optimizing Cursors with v10.0

*Always Declare Cursor's Intent (i.e. Read Only or Updateable)
*Allows for greater control over concurrency implications
*If not specified, SQL Server will decide for you and usually choose
updateable
*Updateable cursors use UPDATE locks preventing other U or X locks
*Updateable cursors that include indexed columns in the update list may
table scan
*SET Number of Rows for each FETCH
*Allows for greater Network Optimization over ANSI's 1-row fetch
*Rows fetched via Open Client cursors are transparently buffered in the
client:
FETCH -> Open Client <- N rows
Buffers

*Keep Cursor Open on a Commit / Rollback
*ANSI closes cursors with each COMMIT causing either poor throughput (by
making the server re-materialize the result set) or poor concurrency (by
holding locks)
*Open Multiple Cursors on a Single Connection
*Reduces resource consumption on both client and Server
*Eliminates risk of a client-side deadlocks with itself

Special Issues for Batch Applications

SQL Server was not designed as a batch subsystem! It was designed as an RDBMS
for large multi-user applications. Designers of batch-oriented applications
should consider the following design alternatives to maximize performance :

Design Alternatives :

*Minimize Client/Server Interaction Whenever Possible
*Don't turn SQL Server into a "file system" by issuing single table /
single row requests when, in actuality, set logic applies.
*Maximize TDS packet size for efficient Interprocess Communication (v10
only)
*New SQL Server 10.0 cursors declared and processed entirely within stored
procedures and triggers offer significant performance gains in batch
processing.
*Investigate Opportunities to Parallelize Processing
*Breaking up single processes into multiple, concurrently executing
connections (where possible) will outperform single-streamed processes
every time.
*Make Use of TEMPDB for Intermediate Storage of Useful Data

Asynchronous Queries

Many, if not most, applications and 3rd Party tools are coded to send queries
with the DB-Library call dbsqlexec( ) which is a synchronous call ! It sends a
query and then waits for a response from SQL Server that the query has
completed !

Designing your applications for asynchronous queries provides many benefits:

1. A "Cooperative" multi-tasking application design under Windows will allow
users to run other Windows applications while your long queries are
processed !
2. Provides design opportunities to parallelize work across multiple SQL Server
connections.

Implementation Choices:

*System 10 Client Library Applications:
*True asynchronous behaviour is built into the entire library. Through the
appropriate use of call-backs, asynchronous behavior is the normal
processing paradigm.
*Windows DB-Library Applications (not true async but polling for data):
*Use dbsqlsend(), dbsqlok(), and dbdataready() in conjunction with some
additional code in WinMain() to pass control to a background process. Code
samples which outline two different Windows programming approaches (a
PeekMessage loop and a Windows Timer approach) are available in the
Microsoft Software Library on Compuserve (GO MSL). Look for SQLBKGD.ZIP
*Non-PC DB-Library Applications (not true async but polling for data):
*Use dbsqlsend(), dbsqlok(), and dbpoll() to utilize non-blocking
functions.

Generating Sequential Numbers

Many applications use unique, sequentially increasing numbers, often as primary
keys. While there are good benefits to this approach, generating these keys can
be a serious contention point if not careful. For a complete discussion of the
alternatives, download Malcolm Colton's White Paper on Sequential Keys from the
SQL Server Library of our OpenLine forum on Compuserve.

The two best alternatives are outlined below.

1. "Primary Key" Table Storing Last Key Assigned
+Minimize contention by either using a separate "PK" table for each
user table or padding out each row to a page. Make sure updates are
"in-place".
+Don't include the "PK" table's update in the same transaction as the
INSERT. It will serialize the transactions.
BEGIN TRAN

UPDATE pk_table SET nextkey = nextkey + 1
[WHERE table_name = @tbl_name]
COMMIT TRAN

/* Now retrieve the information */
SELECT nextkey FROM pk_table
[WHERE table_name = @tbl_name]


+"Gap-less" sequences require additional logic to store and retrieve
rejected values

2. IDENTITY Columns (v10.0 only)
+Last key assigned for each table is stored in memory and
automatically included in all INSERTs (BCP too). This should be the
method of choice for performance.
+Choose a large enough numeric or else all inserts will stop once the
max is hit.
+Potential rollbacks in long transactions may cause gaps in the
sequence !
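A minimal sketch of the IDENTITY approach (v10.0 and later; the table and
column names are only examples):

create table orders_queue
(
    order_id numeric(10,0) identity,
    customer int not null,
    amount money not null
)
go

insert orders_queue (customer, amount) values (42, $19.99)
select @@identity
go

@@identity returns the value just generated on this connection, so there is
no separate key table to update and no extra lock contention.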

Other Application Issues

+Transaction Logging Can Bottleneck Some High Transaction Environments
+Committing a Transaction Must Initiate a Physical Write for
Recoverability
+Implementing multiple statements as a transaction can assist in these
environments by minimizing the number of log writes (log is flushed to
disk on commits).
+Utilizing the Client Machine's Processing Power Balances Load
+Client/Server doesn't dictate that everything be done on Server!
+Consider moving "presentation" related tasks such as string or
mathematical manipulations, sorting, or, in some cases, even
aggregating to the client.
+Populating of "Temporary" Tables Should Use "SELECT INTO" - balance
this with dynamic creation of temporary tables in an OLTP environment.
Dynamic creation may cause blocks in your tempdb.
+"SELECT INTO" operations are not logged and thus are significantly
faster than there INSERT with a nested SELECT counterparts.
+Consider Porting Applications to Client Library Over Time
+True Asynchronous Behavior Throughout Library
+Array Binding for SELECTs
+Dynamic SQL
+Support for ClientLib-initiated callback functions
+Support for Server-side Cursors
+Shared Structures with Server Library (Open Server 10)

Physical Database Design Issues

+Normalized -vs- Denormalized Design
+Index Selection
+Promote "Updates-in-Place" Design
+Promote Parallel I/O Opportunities

Normalized -vs- Denormalized

+Always Start with a Completely Normalized Database
+Denormalization should be an optimization taken as a result of a
performance problem
+Benefits of a normalized database include :
1. Accelerates searching, sorting, and index creation since tables are
narrower
2. Allows more clustered indexes and hence more flexibility in tuning
queries, since there are more tables ;
3. Accelerates index searching since indexes tend to be narrower and
perhaps shorter ;
4. Allows better use of segments to control physical placement of
tables ;
5. Fewer indexes per table, helping UPDATE, INSERT, and DELETE
performance ;
6. Fewer NULLs and less redundant data, increasing compactness of the
database ;
7. Accelerates trigger execution by minimizing the extra integrity work
of maintaining redundant data.
8. Joins are Generally Very Fast Provided Proper Indexes are Available
9. Normal caching and cindextrips parameter (discussed in Server
section) means each join will do on average only 1-2 physical I/Os.
10. Cost of a logical I/O (get page from cache) only 1-2 milliseconds.


3. There Are Some Good Reasons to Denormalize
1. All queries require access to the "full" set of joined data.
2. Majority of applications scan entire tables doing joins.
3. Computational complexity of derived columns requires storage for SELECTs
4. Others ...

Index Selection

+Without a clustered index, all INSERTs and "out-of-place" UPDATEs go
to the last page. The lock contention in high transaction environments
would be prohibitive. This is also true for INSERTs to a clustered
index on a monotonically increasing key.
+High INSERT environments should always cluster on a key which
provides the most "randomness" (to minimize lock / device contention)
that is usable in many queries. Note this is generally not your primary
key !
+Prime candidates for clustered index (in addition to the above)
include :
oColumns Accessed by a Range
oColumns Used with Order By, Group By, or Joins

+Indexes Help SELECTs and Hurt INSERTs
+Too many indexes can significantly hurt performance of INSERTs and
"out-of-place" UPDATEs.
+Prime candidates for nonclustered indexes include :
oColumns Used in Queries Requiring Index Coverage
oColumns Used to Access Less than 20% (rule of thumb) of the Data.

+Unique indexes should be defined as UNIQUE to help the optimizer
+Minimize index page splits with Fillfactor (helps concurrency and
minimizes deadlocks)
+Keep the Size of the Key as Small as Possible
+Accelerates index scans and tree traversals
+Use small datatypes whenever possible. Numerics should also be used
whenever possible as they compare faster than strings.
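For instance, the UNIQUE and fillfactor points above combine into something
like the following sketch (the index and column names are invented; the table
name is borrowed from the earlier forceplan example):

create unique nonclustered index order_detail_nc1
    on order_detail (order_no, line_no)
    with fillfactor = 80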

Promote "Update-in-Place" Design

+"Update-in-Place" Faster by Orders of Magnitude
+Performance gain dependent on number of indexes. Recent benchmark
(160 byte rows, 1 clustered index and 2 nonclustered) showed 800%
difference!
+Alternative ("Out-of-Place" Update) implemented as a physical DELETE
followed by a physical INSERT. These tactics result in:
1. Increased Lock Contention
2. Increased Chance of Deadlock
3. Decreased Response Time and Throughput

+Currently (System 10 and below), Rules for "Update-in-Place" Behavior
Include :
1. Columns updated can not be variable length or allow nulls
2. Columns updated can not be part of an index used to locate the row
to update
3. No update trigger on table being updated (because the inserted and
deleted tables used in triggers get their data from the log)

In v4.9.x and below, only one row may be affected and the
optimizer must know this in advance by choosing a UNIQUE index.
System 10 eliminated this limitation.



Promote Parallel I/O Opportunities

+For I/O-bound Multi-User Systems, Use A lot of Logical and Physical
Devices
+Plan balanced separation of objects across logical and physical
devices.
+Increased number of physical devices (including controllers) ensures
physical bandwidth
+Increased number of logical Sybase devices ensures minimal contention
for internal resources. Look at SQL Monitor's Device I/O Hit Rate for
clues. Also watch out for the 128 device limit per database.
+Create Database (in v10) starts parallel I/O on up to 6 devices at a
time concurrently. If taken advantage of, expect an 800% performance
gain. A 2Gb TPC-B database that took 4.5 hours under 4.9.1 to create
now takes 26 minutes if created on 6 independent devices !
+Use Sybase Segments to Ensure Control of Placement

This is the only way to guarantee logical separation of objects on
devices to reduce contention for internal resources.

+Dedicate a separate physical device and controller to the transaction
log in tempdb too.
+optimize TEMPDB Also if Heavily Accessed
+increased number of logical Sybase devices ensures minimal contention
for internal resources.
+systems requiring increased log throughput today must partition the
database into separate databases

Breaking up one logical database into multiple smaller databases
increases the number of transaction logs working in parallel.


Networking Issues

+Choice of Transport Stacks
+Variable Sized TDS Packets
+TCP/IP Packet Batching

Choice of Transport Stacks for PCs

+Choose a Stack that Supports "Attention Signals" (aka. "Out of Band
Data")
+Provides for the most efficient mechanism to cancel queries.
+Essential for sites providing ad-hoc query access to large databases.
+Without "Attention Signal" capabilities (or the urgent flag in the
connection string), the DB-Library functions DBCANQUERY ( ) and
DBCANCEL ( ) will cause SQL Server to send all rows back to the Client
DB-Library as quickly as possible so as to complete the query. This can
be very expensive if the result set is large and, from the user's
perspective, causes the application to appear as though it has hung.
+With "Attention Signal" capabilities, Net-Library is able to send an
out-of-sequence packet requesting the SQL Server to physically throw
away any remaining results providing for instantaneous response.
+Currently, the following network vendors and associated protocols
support an "Attention Signal" capable implementation:
1. NetManage NEWT
2. FTP TCP
3. Named Pipes (10860) - Do not use urgent parameter with this Netlib
4. Novell LAN Workplace v4.1 0 Patch required from Novell
5. Novell SPX - Implemented internally through an "In-Band" packet
6. Wollongong Pathway
7. Microsoft TCP - Patch required from Microsoft


Variable-sized TDS Packets

Pre-v4.6 TDS Does Not Optimize Network Performance

The current SQL Server TDS packet size is limited to 512 bytes while network
frame sizes are significantly larger (1508 bytes on Ethernet and 4120 bytes on
Token Ring).

The specific protocol may have other limitations!

For example:
+IPX is limited to 576 bytes in a routed network.
+SPX requires acknowledgement of every packet before it will send
another. A recent benchmark measured a 300% performance hit over TCP in
"large" data transfers (small transfers showed no difference).
+Open Client Apps can "Request" a Larger Packet. This has been shown to
give a significant performance improvement on "large" data transfers such
as BCP, Text / Image Handling, and Large Result Sets.
oclients:
#isql -Usa -Annnnn
#bcp -Usa -Annnnn
#ct_con_props (connection, CS_SET, CS_PACKETSIZE, &packetsize,
sizeof(packetsize), NULL)

oAn "SA" must Configure each Servers' Defaults Properly
#sp_configure "default packet size", nnnnn - Sets default
packet size per client connection (defaults to 512)
#sp_configure "maximum packet size", nnnnn - Sets maximum TDS
packet size per client connection (defaults to 512)
#sp_configure "additional netmem", nnnnn - Additional memory
for large packets taken from separate pool. This memory does
not come from the sp_configure memory setting.

Optimal value = ((# connections using large packets * large packet size *
3) + an additional 1-2% of the above calculation for overhead)

Each connection using large packets has 3 network buffers: one
to read; one to write; and one overflow.
@Default network memory - Default-sized packets come from
this memory pool.
@Additional Network memory - Big packets come from this
memory pool.

If not enough memory is available in this pool, the server
will give a smaller packet size, down to the default





TCP/IP Packet Batching

+TCP Networking Layer Defaults to "Packet Batching"
+This means that TCP/IP will batch small logical packets into one
larger physical packet by briefly delaying packets in an effort to fill
the physical network frames (Ethernet, Token-Ring) with as much data as
possible.
+Designed to improve performance in terminal emulation environments
where there are mostly only keystrokes being sent across the network.
+Some Environments Benefit from Disabling Packet Batching
+Applies mainly to socket-based networks (BSD) although we have seen
some TLI networks such as NCR's benefit.
+Applications sending very small result sets or statuses from sprocs
will usually benefit. Benchmark with your own application to be sure.
+This makes SQL Server open all connections with the TCP_NODELAY
option. Packets will be sent regardless of size.
+To disable packet batching, in pre-Sys 11, start SQL Server with the
1610 Trace Flag.

$SYBASE/dataserver -T1610 -d /usr/u/sybase/master.dat ...

Your errorlog will indicate the use of this option with the message:

SQL Server booted with TCP_NODELAY enabled.


Operating System Issues

+Never Let SQL Server Page Fault
+It is better to configure SQL Server with less memory and do more
physical database I/O than to page fault. OS page faults are
synchronous and stop the entire dataserver engine until the page fault
completes. Since database I/O's are asynchronous, other user tasks can
continue!
+Use Process Affinitying in SMP Environments, if Supported
+Affinitying dataserver engines to specific CPUs minimizes overhead
associated with moving process information (registers, etc) between
CPUs. Most implementations will preference other tasks onto other CPUs
as well allowing even more CPU time for dataserver engines.
+Watch out for OS's which are not fully symmetric. Affinitying
dataserver engines onto CPUs that are heavily used by the OS can
seriously degrade performance. Benchmark with your application to find
optimal binding.
+Increase priority of dataserver engines, if supported
+Give SQL Server the opportunity to do more work. If SQL Server has
nothing to do, it will voluntarily yield the CPU.
+Watch out for OS's which externalize their async drivers. They need
to run too!
+Use of OS Monitors to Verify Resource Usage
+The OS CPU monitors only "know" that an instruction is being
executed. With SQL Server's own threading and scheduling, it can
routinely be 90% idle when the OS thinks it's 90% busy. SQL Monitor
shows real CPU usage.
+Look into high disk I/O wait time or I/O queue lengths. These
indicate physical saturation points in the I/O subsystem or poor data
distribution.
+Disk Utilization above 50% may be subject to queuing effects which
often manifest themselves as uneven response times.
+Look into high system call counts which may be symptomatic of
problems.
+Look into high context switch counts which may also be symptomatic of
problems.
+Optimize your kernel for SQL Server (minimal OS file buffering,
adequate network buffers, appropriate KEEPALIVE values, etc).
+Use OS Monitors and SQL Monitor to Determine Bottlenecks
+Most likely "Non-Application" contention points include:
Resource Where to Look
--------- --------------
CPU Performance SQL Monitor - CPU and Trends

Physical I/O Subsystem OS Monitoring tools - iostat, sar...

Transaction Log SQL Monitor - Device I/O and
Device Hit Rate
on Log Device

SQL Server Network Polling SQL Monitor - Network and Benchmark
Baselines

Memory SQL Monitor - Data and Cache
Utilization


+Use of Vendor-supported Striping such as LVM and RAID
+These technologies provide a very simple and effective mechanism of
load balancing I/O across physical devices and channels.
+Use them provided they support asynchronous I/O and reliable writes.
+These approaches do not eliminate the need for Sybase segments to
ensure minimal contention for internal resources.
+Non-read-only environments should expect performance degradations
when using RAID levels other than level 0. These levels all include
fault tolerance where each write requires additional reads to calculate
a "parity" as well as the extra write of the parity data.

Hardware Configuration Issues

+Number of CPUs
+Use information from SQL Monitor to assess SQL Server's CPU usage.
+In SMP environments, dedicate at least one CPU for the OS.
+Advantages and scaling of VSA is application-dependent. VSA was
architected with large multi-user systems in mind.
+I/O Subsystem Configuration
+Look into high Disk I/O Wait Times or I/O Queue Lengths. These may
indicate physical I/O saturation points or poor data distribution.
+Disk Utilization above 50% may be subject to queuing effects which
often manifest themselves as uneven response times.
+Logical Volume configurations can impact performance of operations
such as create database, create index, and bcp. To optimize for these
operations, create Logical Volumes such that they start on different
channels / disks to ensure I/O is spread across channels.
+Discuss device and controller throughput with hardware vendors to
ensure channel throughput high enough to drive all devices at maximum
rating.

General SQL Server Tuning

+Changing Values with sp_configure or buildmaster

It is imperative that you only use sp_configure to change those
parameters that it currently maintains because the process of
reconfiguring actually recalculates a number of other buildmaster
parameters. Using the Buildmaster utility to change a parameter
"managed" by sp_configure may result in a mis-configured server and
cause adverse performance or even worse ...

+Sizing Procedure Cache
oSQL Server maintains an MRU-LRU chain of stored procedure query
plans. As users execute sprocs, SQL Server looks in cache for a
query plan to use. However, stored procedure query plans are
currently not re-entrant! If a query plan is available, it is
placed on the MRU and execution begins. If no plan is in memory, or
if all copies are in use, a new copy is read from the sysprocedures
table. It is then optimized and put on the MRU for execution.
oUse dbcc memusage to evaluate the size and number of each sproc
currently in cache. Use SQL Monitor's cache statistics to get your
average cache hit ratio. Ideally during production, one would hope
to see a high hit ratio to minimize the procedure reads from disk.
Use this information in conjunction with your desired hit ratio to
calculate the amount of memory needed.

+Memory
oTuning memory is more a price/performance issue than anything
else ! The more memory you have available, the greater the
probability of minimizing physical I/O. This is an important goal
though. Not only does physical I/O take significantly longer, but
threads doing physical I/O must go through the scheduler once the
I/O completes. This means that work on behalf of the thread may not
actually continue to execute for quite a while !
oThere are no longer (as of v4.8) any inherent limitations in SQL
Server which cause a point of diminishing returns on memory size.
oCalculate Memory based on the following algorithm :

Total Memory = Dataserver Executable Size (in bytes) +
Static Overhead of 1 Mb +
User Connections x 40,960 bytes +
Open Databases x 644 bytes +
Locks x 32 bytes +
Devices x 45,056 bytes +
Procedure Cache +
Data Cache
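
As a rough worked example (the counts are arbitrary), a server configured
for 100 user connections, 10 open databases, 5000 locks and 20 devices
needs about:

 100 x 40,960 = 4,096,000 bytes
  10 x    644 =     6,440 bytes
5000 x     32 =   160,000 bytes
  20 x 45,056 =   901,120 bytes
                ---------------
                5,163,560 bytes (roughly 5 Mb)

on top of the executable size, the 1 Mb static overhead and whatever is
allocated to the procedure and data caches.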


+Recovery Interval
oAs users change data in SQL Server, only the transaction log is
written to disk right away for recoverability. "Dirty" data and
index pages are kept in cache and written to disk at a later time.
This provides two major benefits:
1. Many transactions may change a page yet only one physical write
is done
2. SQL Server can schedule the physical writes "when appropriate"

oSQL Server must eventually write these "dirty" pages to disk.
oA checkpoint process wakes up periodically and "walks" the cache
chain looking for dirty pages to write to disk
oThe recovery interval controls how often checkpoint writes dirty
pages.

+Tuning Recovery Interval
oA low value may cause unnecessary physical I/O lowering
throughput of the system. Automatic recovery is generally much
faster during boot-up.
oA high value minimizes unnecessary physical I/O and helps
throughput of the system. Automatic recovery may take substantial
time during boot-up.


Audit Performance Tuning for v10.0

+Potentially as Write Intensive as Logging
+Isolate Audit I/O from other components.
+Since auditing nearly always involves sequential writes, RAID Level 0
disk striping or other byte-level striping technology should provide
the best performance (theoretically).
+Size Audit Queue Carefully
+Audit records generated by clients are stored in an in memory audit
queue until they can be processed.
+Tune the queue's size with sp_configure "audit queue size", nnnn (in
rows).
+Sizing this queue too small will seriously impact performance since
all user processes who generate audit activity will sleep if the queue
fills up.
+Size Audit Database Carefully
+Each audit row could require up to 416 bytes depending on what is
audited.
+Sizing this database too small will seriously impact performance
since all user processes who generate audit activity will sleep if the
database fills up.

Back to top
-------------------------------------------------------------------------------

1.5.2: Temp Tables and OLTP

-------------------------------------------------------------------------------

Our shop would like to inform folks of a potential problem when using temporary
tables in an OLTP environment. Using temporary tables dynamically in an OLTP
production environment may result in blocking (single-threading) as the number
of transactions using the temporary tables increases.

Does it affect my application?

This warning only applies to SQL that is being invoked frequently in an OLTP
production environment, where the use of "select into..." or "create table
#temp" is common. Applications using temp tables may experience blocking
problems as the number of transactions increases.

This warning does not apply to SQL that may be in a report or that is not used
frequently. Frequently is defined as several times per second.

Why? Why? Why?

Our shop was working with an application owner to chase down a problem they
were having during peak periods. The problem they were having was severe
blocking in tempdb.

What was witnessed by the DBA group was that as the number of transactions
increased on this particular application, the number of blocks in tempdb also
increased.

We ran some independent tests to simulate a heavily loaded server and
discovered that the data pages in contention were in tempdb's syscolumns
table.

This actually makes sense because during table creation entries are added to
this table, regardless of whether it's a temporary or permanent table.

We ran another simulation where we created the tables before the stored
procedure used them, and the blocks went away. We then performed an additional
test to determine what impact creating temporary tables dynamically would have
on the server and discovered that there is a 33% performance gain from creating
the tables once rather than re-creating them.

Your mileage may vary.

How do I fix this?

To make things better, do the 90's thing -- reduce and reuse your temp tables.
During one application connection/session, aim to create the temp tables only
once.

Let's look at the lifespan of a temp table. If temp tables are created in a
batch within a connection, then all future batches and stored procs will have
access to such temp tables until they're dropped; this is the reduce and reuse
strategy we recommend. However, if temp tables are created in a stored proc,
then the database will drop the temp tables when the stored proc ends, and this
means repeated and multiple temp table creations; you want to avoid this.

Recode your stored procedures so that they assume that the temporary tables
already exist, and then alter your application so that it creates the temporary
tables at start-up -- once and not every time the stored procedure is invoked.

That's it! Pretty simple eh?

Summary

The upshot is that you can realize roughly a 33% performance gain and avoid
the blocking, whose cost is difficult to quantify because it depends on the
specifics of each application.

Basically, you cannot lose.

Solution in pseudo-code

If you have an application that creates the same temp table many times within
one connection, here's how to convert it to reduce and reuse temp table
creations. Raymond Lew has supplied a detailed example for trying this.

Old

open connection
loop until time to go
exec procedure vavoom_often
/* vavoom_often creates and uses #gocart for every call */
/* eg: select * into #gocart from gocart */
go
.
.
.
loop-end
close connection

New

open connection
/* Create the temporary table outside of the sproc */
select * into #gocart from gocart where 1 = 2
go
loop until time to go
exec procedure vavoom_often
/* vavoom_often reuses #gocart which */
/* was created before exec of vavoom_often */
/* - First statement may be a truncate table #gocart */
/* - Execute with recompile */
/* if your table will have more than 10 data pages */
/* as the optimizer will assume 10 data pages for temp tables */
go
.
.
.
loop-end
close connection

Note that it is necessary to call out the code to create the table, and this
becomes a pain in the butt because the create-table statement will have to be
replicated in any stored proc and in the initialization part of the application
- this can be a maintenance nuisance. It can be solved by using a macro
package such as m4 or cpp, or by using and adapting the scripts from Raymond
Lew below.
-------------------------------------------------------------------------------

Brian Black posted a stronger notice than this to the SYBASE-L list, and I
would agree, that any use of select/into in a production environment should be
looked at very hard. Even DSS environments, especially those that share tempdb
with an OLTP environment, should use select/into with care.
-------------------------------------------------------------------------------

From: Raymond Lew

At our company, we try to keep the database and the application loosely coupled
to allow independent changes at the frontend or the backend as long as the
interface stays the same. Embedding temp table definitions in the frontend
would make this more difficult.

To get away from having to embed the temp table definitions in the frontend
code, we are storing the temp table definitions in the database. The frontend
programs retrieve the definitions and declare the tables dynamically at the
beginning of each session. This allows for the change of backend procedures
without changes in the frontend when the API does not change.

Enclosed below are three scripts. The first is an isql script to create the
tables that hold the definitions. The second is a shell script to set up a sample
procedure named vavoom. The third is a shell script that demonstrates the structure
of application code.

I would like to thank Charles Forget and Gordon Rees for their assistance on
these scripts.
--start of setup------------------------------------------------------
/* Raymond Lew - 1996-02-20 */
/* This isql script will set up the following tables:
gocart - sample table
app_temp_defn - where temp table definitions are stored
app_temp_defn_group - a logical grouping of temp table definitions
for an application function
*/

/******************************/
/* gocart table - sample table*/
/******************************/
drop table gocart
go
create table gocart
(
cartname char(10) null
,cartcolor char(30) null
)
go
create unique clustered index gocart1 on gocart (cartname)
go
insert into gocart values ('go1','blue ')
insert into gocart values ('go2','pink ')
insert into gocart values ('go3','green ')
insert into gocart values ('go4','red ')
go


/****************************************************************/
/* app_temp_defn - definition of temp tables with their indexes */
/****************************************************************/
drop table app_temp_defn
go
create table app_temp_defn
(
/* note: temp tables are unique only in first 13 chars */
objectname char(20) not null
,seq_no smallint not null
,defntext char(255) not null
)
go
create unique clustered index app_temp_defn1
on app_temp_defn (objectname,seq_no)
go
insert into app_temp_defn
values ('#gocart',1,'select * into #gocart')
insert into app_temp_defn
values ('#gocart',2,' from gocart where 1=2 ')
go
insert into app_temp_defn
values ('#gocartindex',1,
"create unique index gocartindex on #gocart (cartname) ")
go
insert into app_temp_defn
values ('#gocart1',1, 'select * into #gocart1 from gocart where 1=2')
go




/***********************************************************************/
/* app_temp_defn_group - groupings of temp definitions by applications */
/***********************************************************************/
drop table app_temp_defn_group
go
create table app_temp_defn_group
(
appname char(8) not null
,objectname char(20) not null
)
go
create unique clustered index app_temp_defn_group1
on app_temp_defn_group (appname,objectname)
go
insert into app_temp_defn_group values('abc','#gocart')
insert into app_temp_defn_group values('abc','#gocartindex')
go



/***********************************************************/
/* get_temp_defn - proc for getting the temp defn by group */
/***********************************************************/
drop procedure get_temp_defn
go
create procedure get_temp_defn
(
@appname char(8)
)
as

if @appname = ''
select defntext
from app_temp_defn
order by objectname, seq_no
else
select defntext
from app_temp_defn a
, app_temp_defn_group b
where a.objectname = b.objectname
and b.appname = @appname
order by a.objectname, a.seq_no

return
go

/* let's try some tests */
exec get_temp_defn ''
go
exec get_temp_defn 'abc'
go
--end of setup --------------------------------------------------






--- start of make.vavoom --------------------------------------------
#!/bin/sh
# Raymond Lew - 1996-02-20
#
# bourne shell script for creating stored procedures using
# app_temp_defn table
#
# demo procedure vavoom created here
#
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body


# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

#
# create the proc vavoom after running the temp defn's into db
#
isql -Ukryten -Pjollyguy -Sstarbug -e << eot |more
`cat tabletext`
go
drop procedure vavoom
go
create procedure vavoom
(
@color char(10)
)
as
truncate table #gocart1 /* who knows what lurks in temp tables */
if @color = ''
insert #gocart1 select * from gocart
else
insert #gocart1 select * from gocart where cartcolor=@color
select @color '@color', * from #gocart1
return
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
--- end of make.vavoom --------------------------------------------





--- start of defntest.sh -------------------------------------------
#!/bin/sh
# Raymond Lew 1996-02-01
#
# test script: demonstrate with a bourne shell how an application
# would use the temp table definitions stored in the database
#
# note: you must run setup and make.vavoom first
#
# note: you have to change the passwords, id and etc. for your site
# note: you might have to make some inline changes to make this work
# check out the notes within the body

# get the table defn's into a text file
#
# note: next line :you will need to end the line immediately after eot \
isql -Ukryten -Pjollyguy -Sstarbug -w255 << eot \
| grep -v '\-\-\-\-' | grep -v 'defntext ' | grep -v ' affected' > tabletext
exec get_temp_defn ''
go
eot
# note: prev line :you will need to have a newline immediately after eot

# go mess around in vi
vi tabletext

isql -Ukryten -Pjollyguy -Sstarbug -e << eot | more
`cat tabletext`
go
exec vavoom ''
go
exec vavoom 'blue'
go
eot
# note: prev line :you will need to have a newline immediately after eot

exit
# end of unix script
--- end of defntest.sh -------------------------------------------

That's all, folks. Have Fun

Back to top
-------------------------------------------------------------------------------

1.5.3: Differences between clustered and non-clustered

-------------------------------------------------------------------------------

Preface

I'd like to talk about the difference between a clustered and a non-clustered
index. The two are very different and it's very important to understand the
difference between them in order to know when and how to use each.

I've pondered hard to find the best analogy that I could think of and I've come
up with ... the phone book. Yes, a phone book.

Imagine that each page in our phone book is equivalent to a Sybase 2K data
page. Every time we read a page from our phone book it is equivalent to one
disk I/O.

Since we are imagining, let's also imagine that our mythical SQL Server (that
runs against the phone book) has only enough data cache to buffer 200 phone
pages. When our data cache gets full we have to flush an old page out so we can
read in a new one.

Fasten your seat belts, because here we go...

Clustered Index

A phone book lists everyone by last name. We have an A section, we have a B
section and so forth. Within each section my phone book is clever enough to
list the starting and ending names for the given page.

The phone book is clustered by last name.


create clustered index phone_book_idx on phone_book (last_name)

It's fast to perform the following queries on the phone book:

*Find the address of those whose last name is Cisar.
*Find the address of those whose last name is between Even and Fa

Searches that don't work well:

*Find the address of those whose phone number is 440-1300.
*Find the address of those whose prefix is 440

In order to determine the answer to the two above we'd have to search the
entire phone book. We can call that a table scan.

Non-Clustered Index

To help us solve the problem above we can build a non-clustered index.


create nonclustered index phone_number_idx on phone_book (phone_number)

Our non-clustered index will be built and maintained by our Mythical SQL Server
as follows:

1. Create a data structure that will house a phone_number and information where
the phone_number exists in the phone book: page number and the row within
the page.

The phone numbers will be kept in ascending order.

2. Scan the entire phone book and add an entry to our data structure above for
each phone number found.
3. For each phone number found, note alongside it the page number on which it
was located and which row it was in.

Any time we insert, update or delete phone numbers, our M-SQL Server will
maintain this secondary data structure. It's such a nice Server.

Now when we ask the question:


Find the address of those whose phone number is 440-1300

we don't look at the phone book directly but go to our new data structure and
it tells us which page and row within the page the above phone number can be
found. Neat eh?

Drawbacks? Well, yes, because we probably still can't answer the question:


Find the address of those whose prefix is 440

This is because of the data structure being used to implement non-clustered
indexes. The structure is a list of ordered values (phone numbers) which point
to the actual data in the phone book. This indirectness can lead to trouble
when a range or a match query is issued.

The structure may look like this:
------------------------------------
|Phone Number | Page Number/Row |
====================================
| 440-0000 | 300/23 |
| 440-0001 | 973/45 |
| 440-0002 | 23/2 |
| ... | |
| 440-0030 | 973/45 |
| 440-0031 | 553/23 |
| ... | |
------------------------------------

As one can see, certain phone numbers may map to the same page. This makes
sense, but we need to consider one of our constraints: our Server only has room
for 200 phone pages.

What may happen is that we re-read the same phone page many times. This isn't a
problem if the phone page is in memory. We have limited memory, however, and we
may have to flush our memory to make room for other phone pages. So the
re-reading may actually be a disk I/O.

The Server needs to decide when it's best to do a table scan versus using the
non-clustered index to satisfy mini-range type of queries. The way it decides
this is by applying a heuristic based on the information maintained when an
update statistics is performed.

In summary, non-clustered indexes work really well when used for highly
selective queries and they may work for short, range type of queries.

Suggested Uses

Having suffered many table corruption situations (with 150 SQL servers who
wouldn't? :-)), I'd say always have a clustered index. With a clustered index
you can fish data out around the bad spots on the table thus having minimal
data loss.

When you cluster, build the cluster to satisfy the largest percentage of range
type queries. Don't put the clustered index on your primary key because
typically primary keys are increasing linearly. What happens is that you end up
inserting all new rows at the end of the table thus creating a hot spot on the
last data page.

For detail rows, create the clustered index on the commonly accessed foreign
key. This will aid joins from the master to it.

Use nonclustered indexes to aid queries where your selection is very selective,
for example, primary keys. :-)
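
A minimal sketch of these suggestions, using hypothetical orders and
order_detail tables (the table, column and index names are invented):

/* master table: range queries usually run on the order date,    */
/* so cluster on that rather than on the increasing primary key  */
create clustered index orders_date_idx on orders (order_date)
create unique nonclustered index orders_pk_idx on orders (order_no)

/* detail table: cluster on the commonly joined foreign key */
create clustered index order_detail_fk_idx on order_detail (order_no)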

Back to top
-------------------------------------------------------------------------------

1.5.4: Optimistic versus Pessimistic locking?

-------------------------------------------------------------------------------

This is the same problem another poster had ... basically locking a record to
ensure that it hasn't changed underneath ya.

fca...@ix.netcom.com has a pretty nifty solution if you are using ct-lib (I'll
include that below -- hope it's okay Francisco ... :-)) ...

Basically the problem you are facing is one of being a pessimist or an
optimist.

I contend that your business really needs to drive this.

Most businesses (from my experience) can be optimistic.

That is, if you are optimistic that the chances that someone is going to change
something from underneath the end-user is low, then do nothing about it.

On the other hand, if you are pessimistic that someone may change something
underneath the end-user, you can solve it at least as follows:

Solution #1

Use a timestamp on a header table that would be shared by the common data. This
timestamp field is a Sybase datatype and has nothing to do with the current
time. Do not attempt to do any operations on this column other than
comparisons. What you do is when you grab data to present to the end-user, have
the client software also grab the timestamp column value. After some time,
if the end-user wishes to update the database, compare the client
timestamp with what's in the database and, if it has changed, take
appropriate action: again this is dictated by the business.
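
A minimal sketch of this approach (the order_hdr table and its columns are
invented for illustration; the timestamp column is maintained automatically
by the server):

create table order_hdr
(
order_no int not null
,status char(1) not null
,timestamp /* the keyword alone creates a timestamp column */
)
go

/* 1. When the row is fetched, also remember its timestamp value.      */
/* 2. At update time, the update succeeds only if the stored timestamp */
/*    still matches the one the client originally read.                */
declare @old_ts varbinary(8)
select @old_ts = timestamp from order_hdr where order_no = 42

update order_hdr
set status = "C"
where order_no = 42
and timestamp = @old_ts

if @@rowcount = 0
    /* someone changed (or deleted) the row in the meantime; the  */
    /* action to take here is, again, dictated by the business    */
    print "Row has changed since it was read"
go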

Problem #1

If users are sharing tables but columns are not shared, there's no way to
detect this using timestamps because it's not sufficiently granular.

Solution #2 (presented by fcasas)

... Also are you coding to ct-lib directly? If so there's something that you
could have done, or may still be able to do if you are using cursors.

With ct-lib there's a ct_describe function that lets you see key data. This
allows you to implement optimistic locking with cursors and not need
timestamps. Timestamps are nice, but they are changed when any column on a row
changes, while the ct_describe mechanism detects changes at the columns level
for a greater degree of granularity of the change. In other words, the
timestamp granularity is at the row level, while ct_describe's CS_VERSION_KEY
provides you with granularity at the column level.

Unfortunately this is not well documented and you will have to look at the
training guide and the manuals very closely.

Further if you are using cursors do not make use of the


[for {read only | update [of column_name_list]}]

of the select statement. Omitting this clause will still get you data that can
be updated while only placing a shared lock on the page. If you use the
read only clause you are acquiring shared locks, but the cursor is not
updatable. However, saying


update [of ...

will place update locks on the page, thus causing contention. So, if you are
using cursors, don't use the above clause. Could you answer the following
three questions:

1. Are you using optimistic locking?
2. Are you coding to ct-lib?
3. Are you using cursors?

Problem #2

You need to be coding with ct-lib ...

Solution #3

Do nothing and be optimistic. We do a lot of that in our shop and it's really
not that big of a problem.

Problem #3

Users may clobber each other's changes ... then they'll come looking for you to
clobber you! :-)

Back to top
-------------------------------------------------------------------------------

1.5.5: How do I force an index to be used?

-------------------------------------------------------------------------------

System 11

In System 11, the binding of the internal ordinal value is alleviated so that
instead of using the ordinal index value, the index name can be used instead:
select ... from my_table (index my_first_index)

Sybase 4.x and Sybase System 10

All indexes have an ordinal value assigned to them. For example, the following
query will return the ordinal value of all the indexes on my_table:

select name, indid
from sysindexes
where id = object_id("my_table")

Assuming that we wanted to force the usage of index numbered three:

select ... from my_table(3)

Note: using a value of zero is equivalent to forcing a table scan. Whilst this
sounds like a daft thing to do, sometimes a table scan is a better solution
than heavy index scanning.

It is essential that all index hints be well documented. This is good DBA
practice. It is especially true for Sybase System 10 and below.

One scheme that I have used that works quite well is to implement a table
similar to sysdepends in the database that contains the index hints.

create table idxdepends
(
tblname varchar(32) not null -- Table being hinted
,depname varchar(50) not null -- Proc, trigger or app that contains hint.
,idxname varchar(32) not null -- Index being hinted at
--,hintcount int null -- You may want to count the number of hints per proc.
)

Obviously it is a manual process to keep the table populated, but it can save a
lot of trouble later on.
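
For example, if a procedure called proc_find_order (a made-up name) hints at
my_first_index on my_table, the hint and its documentation row might look
like this:

/* inside the stored procedure, System 11 syntax */
select ... from my_table (index my_first_index) where ...

/* record the hint in idxdepends so that it can be tracked down later */
insert idxdepends (tblname, depname, idxname)
values ("my_table", "proc_find_order", "my_first_index")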

Back to top
-------------------------------------------------------------------------------

1.5.6: Why place tempdb and log on low numbered devices?

-------------------------------------------------------------------------------

System 10 and below.

In System 10 and Sybase 4.X, the I/O scheduler starts at logical device (ldev)
zero and works up the ldev list looking for outstanding I/O's to process.
Taking this into consideration, the following device fragments (disk init)
should be added before any others:

1. tempdb
2. log
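
A sketch of the idea with made-up device names, paths and sizes (on a 2K page
server, size is given in 2K virtual pages, so 51200 is roughly 100 Mb):

/* run these disk inits first so the devices get the lowest vdevno's */
disk init name = "tempdb_dev", physname = "/dev/rdsk/c0t1d0s4",
    vdevno = 1, size = 51200
go
disk init name = "log_dev", physname = "/dev/rdsk/c0t1d0s5",
    vdevno = 2, size = 51200
go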

Back to top
-------------------------------------------------------------------------------


1.5.9: You and showplan output

-------------------------------------------------------------------------------

Microsoft SQL Server includes a very intelligent cost-based query optimizer
which, given an ad-hoc query, can quickly determine the best access method for
retrieving the data, including the order in which to join tables and whether or
not to use indexes that may be on those tables. By using a cost-based query
optimizer, the System Administrator or end user is released from having to
determine the most efficient way of structuring the query to get optimal
performance -- instead, the optimizer looks at all possible join orders, and
the cost of using each index, and picks the plan with the least cost in terms
of page I/O's.

Detailed information on the final access method that the optimizer chooses can
be displayed for the user by executing the Transact-SQL "SET SHOWPLAN ON"
command. This command will show each step that the optimizer uses in joining
tables and which, if any, indexes it chooses to be the least-cost method of
accessing the data. This can be extremely beneficial when analyzing certain
queries to determine if the indexes that have been defined on a table are
actually being considered by the optimizer as useful in getting to the data.
This document will define and explain each of the output messages from
SHOWPLAN, and give example queries and the output from SHOWPLAN to illustrate
the point. The format will be consistent throughout: a heading which
corresponds to the exact text of a SHOWPLAN statement, followed by a
description of what it means, a sample query which generates that particular
message, and the full output from executing the query with the SHOWPLAN option
on. Wherever possible, the queries will use the existing tables and indexes,
unaltered, from the SQL Server "Pubs" sample database.
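
To examine a plan without actually executing the query, showplan is commonly
combined with noexec (a usage sketch; both are session-level SET options):

set showplan on
go
set noexec on    /* optional: compile and show the plan, skip execution */
go
SELECT au_lname, au_fname FROM authors WHERE city = "Oakland"
go
set noexec off
go
set showplan off
go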

STEP n

This statement will be included in the SHOWPLAN output for every query, where n
is an integer, beginning with "STEP 1". For some queries, SQL Server cannot
effectively retrieve the results in a single step, and must break the query
plan into several steps. For example, if a query includes a GROUP BY clause,
the query will need to be broken into at least two steps: one step to select
the qualifying rows from the table, and another step to group them. The
following query demonstrates a single-step query.
Query: SELECT au_lname, au_fname
FROM Authors
WHERE city = "Oakland"

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
authors
Nested iteration
Table Scan

The type of query is SELECT (into a worktable)

This SHOWPLAN statement indicates that SQL Server needs to insert some of the
query results into an intermediate worktable, and later in the query processing
will then select the values out of that table. This is most often seen with a
query which involves a GROUP BY clause, as the results are first put into a
work table, and then the qualifying rows in the work table are grouped based on
the given column in the GROUP BY clause. The following query returns a list of
all cities and indicates the number of authors that live in each city. The
query plan is composed of two steps: the first step selects the rows into a
worktable, and the second step retrieves the grouped rows from the worktable:
Query: SELECT city, total_authors = count(*)
FROM Authors
GROUP BY city

SHOWPLAN: STEP 1
The type of query is SELECT (into a
worktable)
GROUP BY
Vector Aggregate
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

The type of query is <query type>

This statement describes the type of query for each step. For most user
queries, the value for <query type> will be SELECT, INSERT, UPDATE, or DELETE.
If SHOWPLAN is turned on while other commands are issued, the <query type> will
reflect the command that was issued. The following examples show various
outputs for different queries/commands:
Query 1: CREATE TABLE Mytab (col1 int)
SHOWPLAN 1: STEP 1
The type of query is TABCREATE

Query 2: INSERT Publishers
VALUES ("9904", "NewPubs", "Seattle", "WA")

SHOWPLAN 2: STEP 1
The type of query is INSERT
The update mode is direct
Table Scan
TO TABLE
publishers

The update mode is deferred

There are two methods or "modes" that SQL Server can use to perform update
operations such as INSERT, DELETE, UPDATE, and SELECT INTO. These methods are
called deferred update and direct update. When the deferred method is used, the
changes are applied to all rows of the table by making log records in the
transaction log to reflect the old and new value of the column(s) being
modified (in the case of UPDATE operations), or the values which will be
inserted or deleted (in the case of INSERT and DELETE, respectively). When all
of the log records have been constructed, the changes are then applied to the
data pages. This method generates more log records than a direct update
(discussed later), but it has the advantage of allowing the execution of
commands which may cascade changes throughout a table. For example, consider a
table which has a column "col1" with a unique index on it, and data values
numbered consecutively from 1 to 100 in that column. Assume an UPDATE statement
is executed to increase the value in each row by 1:
Query 1: UPDATE Mytable
SET col1 = col1 + 1

SHOWPLAN 1: STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE
Mytable
Nested iteration
Table Scan
TO TABLE
Mytable


Consider the consequences of starting at the first row in the table, and
updating each row, through the end of the table. Updating the first row (which
has an initial value of 1) to 2 would cause an error, as the unique index would
be violated since there is already a value of 2 in the table; likewise,
updating the second row (which has an initial value of 2) to 3 would also cause
a unique key violation, as would all rows through the end of the table, except
for the last row. By using deferred updates, this problem is easily avoided.
The log records are first constructed to show what the new values for each row
will be, the existing rows are deleted, and the new values inserted.

Just as with UPDATE commands, INSERT commands may also be deferred for very
similar reasons. Consider the following query (there is no clustered index or
unique index on the "roysched" table):
Query 2: INSERT roysched SELECT * FROM roysched

SHOWPLAN 2: STEP 1
The type of query is INSERT
The update mode is deferred
FROM TABLE
roysched
Nested iteration
Table Scan
TO TABLE
roysched


Since there is no clustered index on the table, the new rows will be added to
the end of the table. The query processor needs to be able to differentiate
between the existing rows that are currently in the table (prior to the INSERT
command) and the rows which will be inserted, so as to not get into a
continuous loop of selecting a row, inserting it at the end of the table,
selecting that row that it just inserted, and re-inserting it again. By using
the deferred method of inserting, the log records can first be constructed
to show all of the currently-existing values in the table, then SQL Server will
re-read those log records to insert the rows into the table.

The update mode is direct

Whenever possible, SQL Server will attempt to use the direct method of applying
updates to tables, since it is faster and requires fewer log records to be
generated than the deferred method. Depending on the type of command, one or
more criteria must be met in order for SQL Server to perform the update using
the direct method. Those criteria are:

*INSERT: For the direct update method to be used for INSERT operations,
the table into which the rows are being inserted cannot be a table which is
being read from in the same command. The second query example in the
previous section demonstrates this, where the rows are being inserted into
the same table from which they are being selected. In addition, if rows
are being inserted into the target table, and one or more of the target
table's columns appear in the WHERE clause of the query then the deferred
method, rather than the direct method, will be used.
*SELECT INTO: When a table is being populated with data by means of a
SELECT INTO command, the direct method will always be used to insert the
new rows.
*DELETE: For the direct update method to be used for DELETE operations,
the query optimizer must be able to determine that either 0 or 1 rows
qualify for the delete. The only means for it to verify this is to check
that there is a unique index on the table, which is qualified in the WHERE
clause of the DELETE command, and the target table is not joined with any
other table(s).
*UPDATE: For the direct update method to be used for UPDATE operations,
the same criteria apply as for DELETE: a unique index must exist such that
the query optimizer can determine that no more than 1 row qualifies for the
update, and the only table in the UPDATE command is the target table to
update. In addition, all columns that are being updated must be datatypes
that are fixed length, rather than variable-length. Note that any column
that allows NULLs is internally stored by SQL Server as a variable-length
datatype column.

Query 1: DELETE
FROM authors
WHERE au_id = "172-32-1176"

SHOWPLAN 1: STEP 1
The type of query is DELETE
The update mode is direct
FROM TABLE
authors
Nested iteration
Using Clustered Index
TO TABLE
authors

Query 2: UPDATE titles
SET type = "popular_comp"
WHERE title_id = "BU2075"

SHOWPLAN 2: STEP 1
The type of query is UPDATE
The update mode is direct
FROM TABLE
titles
Nested iteration
Using Clustered Index
TO TABLE
titles

Query 3: UPDATE titles
SET price = $5.99
WHERE title_id = "BU2075"

SHOWPLAN 3: STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE
titles
Nested iteration
Using Clustered Index
TO TABLE
titles

Note that the only difference between the second and third example queries is
the column of the table which is being updated. In the second query, the direct
update method is used, whereas in the third query, the deferred method is used.
This difference is due to the datatype of the column being updated: the
titles.type column is defined as "char(12) NOT NULL", while the titles.price
column is defined as "money NULL". Since the titles.price column is not a
fixed-length datatype, the direct method cannot be used.

GROUP BY

This statement appears in the SHOWPLAN output for any query that contains a
GROUP BY clause. Queries that contain a GROUP BY clause will always be at least
two-step queries: one step to select the qualifying rows into a worktable and
group them, and another step to return the rows from the worktable. The
following example illustrates this:
Query: SELECT type, AVG(advance),
SUM(ytd_sales)
FROM titles
GROUP BY type

SHOWPLAN: STEP 1
The type of query is SELECT (into a
worktable)
GROUP BY
Vector Aggregate
FROM TABLE
titles
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

Scalar Aggregate

Transact-SQL includes the aggregate functions:

*AVG()
*COUNT()
*COUNT(*)
*MAX()
*MIN()
*SUM()

Whenever an aggregate function is used in a SELECT statement that does not
include a GROUP BY clause, it produces a single value, regardless of whether it
is operating on all of the rows in a table or on a subset of the rows defined
by a WHERE clause. When an aggregate function produces a single value, the
function is called a "scalar aggregate", and is listed as such by SHOWPLAN. The
following example shows the use of scalar aggregate functions:
Query: SELECT AVG(advance), SUM(ytd_sales)
FROM titles
WHERE type = "business"

SHOWPLAN: STEP 1
The type of query is SELECT
Scalar Aggregate
FROM TABLE
titles
Nested iteration
Table Scan

STEP 2
The type of query is SELECT
Table Scan


Notice that SHOWPLAN considers this a two-step query, which is very similar to
the SHOWPLAN from the GROUP BY query listed earlier. Since the query contains a
scalar aggregate, which will return a single value, SQL Server keeps internally
a "variable" to store the result of the aggregate function. It can be thought
of as a temporary storage space to keep a running total of the aggregate
function as the qualifying rows from the table are evaluated. After all rows
have been evaluated from the table (Step 1), the final value from the
"variable" is then selected (Step 2) to return the scalar aggregate result.

Vector Aggregate

When a GROUP BY clause is used in a query which also includes an aggregate
function, the aggregate function produces a value for each group. These values
are called "vector aggregates". The "Vector Aggregate" statement from SHOWPLAN
indicates that the query includes a vector aggregate. Below is an example query
and SHOWPLAN which includes a vector aggregate:
Query: SELECT title_id, AVG(qty)
FROM sales
GROUP BY title_id

SHOWPLAN: STEP 1
The type of query is SELECT (into a
worktable)
GROUP BY
Vector Aggregate
FROM TABLE
sales
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

FROM TABLE

This SHOWPLAN step indicates the table that the query is reading from. In most
queries, the "FROM TABLE" will be followed on the next line by the name of the
table which is being selected from. In other cases, it may indicate that it is
selecting from a worktable (discussed later). The main importance of examining
the table names after the "FROM TABLE" output is to determine the order in
which the query optimizer is joining the tables. The order of the tables listed
after the "FROM TABLE" statements in the SHOWPLAN output indicate the same
order that the tables were joined; this order may be (and often times is)
different than the order that they are listed in the FROM clause of the query,
or the order that they appear in the WHERE clause of the query. This is because
the query optimizer examines all different join orders for the tables involved,
and picks the join order that will require the least amount of I/O's.
Query: SELECT authors.au_id, au_fname, au_lname
FROM authors, titleauthor, titles
WHERE authors.au_id = titleauthor.au_id
AND titleauthor.title_id = titles.title_id
AND titles.type = "psychology"

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
titles
Nested iteration
Table Scan
FROM TABLE
titleauthor
Nested iteration
Table Scan
FROM TABLE
authors
Nested iteration
Table Scan

This query illustrates the order in which the SQL Server query optimizer
chooses to join the tables, which is not the order that they were listed in the
FROM clause or the WHERE clause. By examining the order of the "FROM TABLE"
statements, it can be seen that the qualifying rows from the titles table are
first located (using the search clause <titles.type = "psychology">). Those
rows are then joined with the titleauthor table (using the join clause <
titleauthor.title_id = titles.title_id>), and finally the titleauthor table is
joined with the authors table to retrieve the desired columns (using the join
clause <authors.au_id = titleauthor.au_id>).

TO TABLE

When a command is issued which makes or attempts to make a modification to one
or more rows of a table, such as INSERT, DELETE, UPDATE, or SELECT INTO, the
"TO TABLE" statement will show the target table which is being modified. For
some operations which require an intermediate step which inserts rows into a
worktable (discussed later), the "TO TABLE" will indicate that the results are
going to the "Worktable" table, rather than a user table. The following
examples illustrate the use of the "TO TABLE" statement:
Query 1: INSERT sales
VALUES ("8042", "QA973", "7/15/92", 7,
"Net 30", "PC1035")

SHOWPLAN 1: STEP 1
The type of query is INSERT
The update mode is direct
Table Scan
TO TABLE
sales

Query 2: UPDATE publishers
SET city = "Los Angeles"
WHERE pub_id = "1389"

SHOWPLAN 2: STEP 1
The type of query is UPDATE
The update mode is deferred
FROM TABLE
publishers
Nested iteration
Using Clustered Index
TO TABLE
publishers

Notice that the SHOWPLAN for the second query indicates that the publishers
table is used both as the "FROM TABLE" as well as the "TO TABLE". In the case
of UPDATE operations, the optimizer needs to read the table which contains the
row(s) to be updated, resulting in the "FROM TABLE" statement, and then needs
to modify the row(s), resulting in the "TO TABLE" statement.

Worktable

For some types of queries, such as those that require the results to be ordered
or displayed in groups, the SQL Server query optimizer may determine that it is
necessary to create its own temporary worktable. The worktable is used to hold
the intermediate results of the query, at which time the result rows can be
ordered or grouped, and then the final results selected from that worktable.
When all results have been returned, the worktable is automatically dropped.
The worktables are always created in the tempdb database, so it is possible
that the system administrator may have to increase the size of tempdb to
accommodate queries which require very large worktables. Since the query
optimizer creates these worktables for its own internal use, the names of the
worktables will not be listed in the tempdb..sysobjects table.

Worktables will always need to be used when a query contains a GROUP BY clause.
For queries involving ORDER BY, it is possible that the ordering can be done
without the use of the worktable. If there is a clustered index on the column
(s) in the ORDER BY clause, the optimizer knows that the rows are already
stored in sorted order, so a sort in a worktable is not necessary (although
there are exceptions to this, depending on the sort order which is installed on
the server). Since the data is not stored in sorted order for nonclustered
indexes, the worktable will not be necessary if the cheapest access plan is by
using the nonclustered index. However, if the optimizer determines that
scanning the entire table will require fewer I/Os than using the nonclustered
index, then a worktable will need to be created for the ordering of the
results. The following examples illustrate the use of worktables:
Query 1: SELECT type, AVG(advance), SUM(ytd_sales)
FROM titles
GROUP BY type

SHOWPLAN 1: STEP 1
The type of query is SELECT (into a
worktable)
GROUP BY
Vector Aggregate
FROM TABLE
titles
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

Query 2: SELECT *
FROM authors
ORDER BY au_lname, au_fname

SHOWPLAN 2: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for ORDER BY
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
This step involves sorting
FROM TABLE
Worktable
Using GETSORTED
Table Scan

Query 3: SELECT *
FROM authors
ORDER BY au_id

SHOWPLAN 3: STEP 1
The type of query is SELECT
FROM TABLE
authors
Nested iteration
Table Scan

In the third example above, notice that no worktable was created for the ORDER
BY clause. This is because there is a unique clustered index on the
authors.au_id column, so the data is already stored in sorted order based on
the au_id value, and an additional sort for the ORDER BY is not necessary. In
the second example, there is a composite nonclustered index on the columns
au_lname and au_fname. However, since the optimizer chose not to use the index,
and due to the sort order on the SQL Server, a worktable needed to be created
to accommodate the sort.

Worktable created for SELECT_INTO

SQL Server's SELECT INTO operation performs two functions: it first creates a
table with the exact same structure as the table being selected from, and then
it inserts all rows which meet the WHERE conditions (if a WHERE clause is used)
of the table being selected from. The "Worktable created for SELECT_INTO"
statement is slightly misleading, in that the "worktable" that it refers to is
actually the new physical table that is created. Unlike other worktables, it is
not dropped when the query finishes executing. In addition, the worktable is
not created in tempdb, unless the user specifies tempdb as the target database
for the new table.
Query: SELECT *
INTO seattle_stores
FROM stores
WHERE city = "seattle"

SHOWPLAN: STEP 1
The type of query is TABCREATE

STEP 2
The type of query is INSERT
The update mode is direct
Worktable created for SELECT_INTO
FROM TABLE
stores
Nested iteration
Table Scan
TO TABLE
Worktable

Worktable created for DISTINCT

When a query is issued which includes the DISTINCT keyword, all duplicate rows
are excluded from the results so that only unique rows are returned. To
accomplish this, SQL Server first creates a worktable to store all of the
results of the query, including duplicates, just as though the DISTINCT keyword
was not included. It then sorts the rows in the worktable, and is able to
easily discard the duplicate rows. Finally, the rows from the worktable are
returned, which ensures that no duplicate rows will appear in the output.
Query: SELECT DISTINCT city
FROM authors

SHOWPLAN: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for DISTINCT
FROM TABLE
authors
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
This step involves sorting
FROM TABLE
Worktable
Using GETSORTED
Table Scan

Worktable created for ORDER BY

As discussed previously, queries which include an ORDER BY clause will often
require the use of a temporary worktable. When the optimizer cannot use an
available index for the ordering, it creates a worktable for use in sorting the
result rows prior to returning them. Below is an example which shows the
worktable being created for the ORDER BY clause:
Query: SELECT *
FROM authors
ORDER BY city

SHOWPLAN: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for ORDER BY
FROM TABLE
authors
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
This step involves sorting
FROM TABLE
Worktable
Using GETSORTED
Table Scan

Worktable created for REFORMATTING

When joining tables, SQL Server may in some cases choose to use a "reformatting
strategy" to join the tables and return the qualifying rows. This strategy is
only considered as a last resort, when the tables are large and neither table
in the join has a useful index to use. The reformatting strategy inserts the
rows from the smaller of the two tables into a worktable. Then, a clustered
index is created on the worktable, and the clustered index is then used in the
join to retrieve the qualifying rows from each table. The main cost in using
the reformatting strategy is the time and I/Os necessary to build the clustered
index on the worktable; however, that cost is still cheaper than joining the
tables with no index. If user queries are using the reformatting strategy, it
is generally a good idea to examine the tables involved and create indexes on
the columns of the tables which are being joined. The following example
illustrates the reformatting strategy. Since none of the tables in the Pubs
database are large enough for the optimizer to consider using this strategy,
two new tables are used. Each table has 5 columns defined as "char(200)". Tab1
has 500 rows and Tab2 has 250 rows.
Query: SELECT Tab1.col1
FROM Tab1, Tab2
WHERE Tab1.col1 = Tab2.col1

SHOWPLAN: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for REFORMATTING
FROM TABLE
Tab2
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Tab1
Nested iteration
Table Scan
FROM TABLE
Worktable
Nested iteration
Using Clustered Index

This step involves sorting

This SHOWPLAN statement indicates that the query must sort the intermediate
results before returning them to the user. Queries that specify DISTINCT will
require an intermediate sort, as well as queries that have an ORDER BY clause
which cannot use an available index. As stated earlier, the results are put
into a worktable, and the worktable is then sorted. The example on the
following page demonstrates a query which requires a sort:
Query: SELECT DISTINCT state
FROM stores

SHOWPLAN: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for DISTINCT
FROM TABLE
stores
FROM TABLE
stores
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
This step involves sorting
FROM TABLE
Worktable
Using GETSORTED
Table Scan

Using GETSORTED

This statement indicates one of the ways in which the result rows can be
returned from a table. In the case of "Using GETSORTED", the rows will be
returned in sorted order. However, not all queries which return rows in sorted
order will have this step. In the case of a query which has an ORDER BY clause,
and an index with the proper sort sequence exists on those columns being
ordered, an intermediate sort may not be necessary, and the rows can simply be
returned in order by using the available index. The "Using GETSORTED" method is
used when SQL Server must first create a temporary worktable to sort the result
rows, and then return them in the proper sorted order. The following example
shows a query which requires a worktable to be created and the rows returned in
sorted order:
Query: SELECT au_id, au_lname, au_fname, city
FROM authors
ORDER BY city

SHOWPLAN: STEP 1
The type of query is INSERT
The update mode is direct
Worktable created for ORDER BY
FROM TABLE
authors
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable
STEP 2
The type of query is SELECT
This step involves sorting
FROM TABLE
Worktable
Using GETSORTED
Table Scan

Nested iteration

The "Nested iteration" is the default technique used to join tables and/or
return rows from a table. It simply indicates that the optimizer is using one
or more sets of loops to go through a table and retrieve a row, qualify the row
based on the search criteria given in the WHERE clause, return the row to the
front-end, and loop again to get the next row. The method in which it gets the
rows (such as using an available index) is discussed later. The following
example shows the optimizer doing nested iterations through each of the tables
in the join:
Query: SELECT title_id, title
FROM titles, publishers
WHERE titles.pub_id = publishers.pub_id
AND publishers.pub_id = '1389'

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
publishers
Nested iteration
Using Clustered Index
FROM TABLE
titles
Nested iteration
Table Scan

EXISTS TABLE : nested iteration

This SHOWPLAN step is very similar to the previous one of "Nested iteration".
The difference, however, is that this step indicates a nested iteration on a
table which is part of an existence test in a query. There are several ways an
existence test can be written in Transact-SQL, such as "EXISTS", "IN", or "=
ANY". Prior to SQL Server version 4.2, queries which contained an IN clause
followed by a subquery were treated as table joins. Beginning with version 4.2,
these queries are now treated the same as if they were written with an EXISTS
clause. The following examples demonstrate the SHOWPLAN output with queries
which test for existence of values:
Query 1: SELECT au_lname, au_fname
FROM authors
WHERE EXISTS
(SELECT *
FROM publishers
WHERE authors.city = publishers.city)

SHOWPLAN 1: STEP 1
The type of query is SELECT
FROM TABLE
authors
Nested iteration
Table Scan
FROM TABLE
publishers
EXISTS TABLE : nested iteration
Table Scan

Query 2: SELECT title
FROM titles
WHERE pub_id IN
(SELECT pub_id
FROM publishers
WHERE city LIKE "B%")

SHOWPLAN 2: STEP 1
The type of query is SELECT
FROM TABLE
titles
Nested iteration
Table Scan
FROM TABLE
publishers
EXISTS TABLE : nested iteration
Table Scan

Table Scan

This SHOWPLAN statement indicates which method was used to retrieve the
physical result rows from the given table. When the "table scan" method is
used, the execution begins with the first row in the table; each row is then
retrieved and compared with the conditions in the WHERE clause, and returned to
the front-end if it meets the given criteria. Regardless of how many rows
qualify, every row in the table must be looked at, so for very large tables, a
table scan can be very costly in terms of page I/Os. If a table has one or more
indexes on it, the query optimizer may still choose to do a table scan instead
of using one of the available indexes if the optimizer determines that the
indexes are too costly or are not useful for the given query. The following
query shows a typical table scan:
Query: SELECT au_lname, au_fname
FROM authors

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
authors
Nested iteration
Table Scan

Using Clustered Index

This SHOWPLAN statement indicates that the query optimizer chose to use the
clustered index on a table to retrieve the rows. Unlike a table scan, using an
index to retrieve rows does not require the optimizer to examine every row in
the table (unless the WHERE clause applies to all rows). For queries which
return a small percentage of the rows from a large table, the savings in terms
of I/Os of using an index versus doing a table scan can be very significant.
The following query shows the clustered index being used to retrieve the rows
from the table:
Query: SELECT title_id, title
FROM titles
WHERE title_id LIKE "PS2%"

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
titles
Nested iteration
Using Clustered Index

Index : <index name>

Like the previous statement with the clustered index, this statement indicates
that the optimizer chose to use an index to retrieve the rows instead of doing
a table scan. The <index name> that follows the "Index :" label will always be
the name of a nonclustered index on the table. Remember that each table can
have no more than one clustered index, but can have up to 249 nonclustered
indexes. The following query illustrates the use of a nonclustered index to
find and return rows. This query uses the sysobjects table in the master
database as an example, rather than a table in Pubs, since using a nonclustered
index on the Pubs tables is generally more costly in terms of I/O than a
straight table scan, due to the fact that most of the tables are only 1 page in
size.
Query: SELECT *
FROM master..sysobjects
WHERE name = "mytable"
AND uid = 5

SHOWPLAN: STEP 1
The type of query is SELECT
FROM TABLE
master..sysobjects
Nested iteration
Index : ncsysobjects

Using Dynamic Index

This SHOWPLAN statement indicates that the query optimizer has chosen to build
its own index during the execution of the query, for use in its "OR strategy".
Since queries involving OR clauses are generally not very efficient in terms of
being able to quickly access the data, the SQL Server optimizer may choose to
use the OR strategy. When the OR strategy is used, the optimizer makes several
passes through the table -- one pass for each argument to each OR clause. The
results of each pass are added to a single worktable, and the worktable is then
sorted to remove any duplicate rows. The worktable does not contain the actual
data rows from the table, but rather it contains the row IDs for the matching
rows. The row IDs are simply a combination of the page number and row number on
that page for each of the rows. When the duplicates have been eliminated, the
optimizer considers the worktable of row IDs to be, essentially, its own index
("Dynamic Index") pointing to the table's data rows. It can then simply scan
through the worktable, get each row ID, and return the data row from the table
that has that row ID.

The OR strategy is not limited only to queries that contain OR clauses. When an
IN clause is used to list a group of possible values, SQL Server interprets
that the same way as though the query had a separate equality clause for each
of the values in the IN clause. To illustrate the OR strategy and the use of
the Dynamic Index, the queries will be based on a table with 10,000 unique data
rows, a unique nonclustered index on column "col1", and a unique nonclustered
index on column "col2".
Query 1: SELECT *
FROM Mytable
WHERE col1 = 355
OR col2 = 732

SHOWPLAN 1: STEP 1
The type of query is SELECT
FROM TABLE
Mytable
Nested iteration
Index : col1_idx
FROM TABLE
Mytable
Nested iteration
Index : col2_idx
FROM TABLE
Mytable
Nested iteration
Using Dynamic Index

Query 2: SELECT *
FROM Mytable
WHERE col1 IN (700, 1503, 311)

SHOWPLAN 2: STEP 1
The type of query is SELECT
FROM TABLE
Mytable
Nested iteration
Index : col1_idx
FROM TABLE
Mytable
Nested iteration
Index : col1_idx
FROM TABLE
Mytable
Nested iteration
Index : col1_idx
FROM TABLE
Mytable
Nested iteration
Using Dynamic Index


SQL Server does not always resort to using the OR strategy for every query that
contains OR clauses. The following conditions must be met before it will choose
to use the OR strategy:

*All columns in the OR clause must belong to the same table.
*If any portion of the OR clause requires a table scan (due to lack of
index or poor selectivity of a given index), then a table scan will be used
for the entire query, rather than the OR strategy.
*The decision to use the OR strategy is made after all indexes and costs
are evaluated. If any other access plan is less costly (in terms of page I/
Os), SQL Server will choose to use the plan with the least cost. In the
examples above, if a straight table scan would result in less page I/Os
than using the OR strategy, then the queries would be processed as a table
scan instead of using the Dynamic Index.

Back to top
-------------------------------------------------------------------------------

1.5.10: Poor man's sp_sysmon

-------------------------------------------------------------------------------

This is needed for System 10 and Sybase 4.9.2 where there is no sp_sysmon
command available.

Fine tune the waitfor for your application. You may need TS Role -- see Q3.1.
use master
go
dbcc traceon(3604)
dbcc monitor ("clear", "all", "on")
waitfor delay "00:01:00"
dbcc monitor ("sample", "all", "on")
dbcc monitor ("select", "all", "on")
dbcc traceon(8399)
select field_name, group_name, value
from sysmonitors
dbcc traceoff(8399)
go
dbcc traceoff(3604)
go

Back to top
-------------------------------------------------------------------------------

1.5.11: View MRU-LRU procedure cache chain

-------------------------------------------------------------------------------

dbcc procbuf gives a listing of the current contents of the procedure cache. By
repeating the process at intervals it is possible to watch procedures moving
down the MRU-LRU chain, and so to see how long procedures remain in cache. The
neat thing about this approach is that you can size your cache according to
what is actually happening, rather than relying on estimates based on
assumptions that may not hold on your site.

To run it:
dbcc traceon(3604)
go
dbcc procbuf
go

If you use sqsh it's a bit easier to grok the output:
dbcc traceon(3604);
dbcc procbuf;|fgrep <pbname>

See Q1.5.7 regarding procedure cache sizing.

Back to top
-------------------------------------------------------------------------------

1.5.12: Improving Text/Image Type Performance

-------------------------------------------------------------------------------

If you know that you are going to be using a text/image column immediately,
insert the row setting the column to a non-null value.

There's a noticeable performance gain.

Unfortunately, text and image datatypes cannot be passed as parameters to
stored procedures. The address of the text or image location must be created
and returned where it is then manipulated by the calling code. This means that
transactions involving both text and image fields and stored procedures are not
atomic. However, the datatypes can still be declared as not null in the table
definition.

Given this example -
create table key_n_text
(
key int not null,
notes text not null
)

This stored procedure can be used -
create procedure sp_insert_key_n_text
@key int,
@textptr varbinary(16) output
as

/*
** Generate a valid text pointer for WRITETEXT by inserting an
** empty string in the text field.
*/
insert key_n_text
(
key,
notes
)
values
(
@key,
""
)

select @textptr = textptr(notes)
from key_n_text
where key = @key

return 0
go

The return parameter is then used by the calling code to update the text field,
via the dbwritetext() function if using DB-Library for example.

Back to top



SQL Fundamentals



6.1.1 Are there alternatives to row at a time processing?
6.1.2 When should I execute an sp_recompile?
6.1.3 What are the different types of locks and what do they mean?
6.1.4 What's the purpose of using holdlock?
6.1.5 What's the difference between an update in place versus a
deferred update? - see Q1.5.9
6.1.6 How do I find the oldest open transaction?
6.1.7 How do I check if log truncation is blocked?
6.1.8 The timestamp datatype
6.1.9 Stored Procedure Recompilation and Reresolution
6.1.10 How do I manipulate binary columns?
6.1.11 Does Sybase support Row Level Locking?
6.1.12 Why do my page locks not get escalated to a table lock after 200
locks?

next prev ASE FAQ
-------------------------------------------------------------------------------

6.1.1: Alternative to row at a time processing

-------------------------------------------------------------------------------

Someone asked how they could speed up their processing. They were batch
updating/inserting gobs of information. Their algorithm was something as
follows:


... In another case I do:
If exists (select record) then
update record
else
insert record

I'm not sure which way is faster or if it makes a difference. I am doing
this for as many as 4000 records at a time (calling a stored procedure 4000
times!). I am interested in knowing any way to improve this. The parameter
translation alone on the procedure calls takes 40 seconds for 4000 records.
I am using exec in DB-Lib.

Would RPC or CT-Lib be better/faster?

A netter responded stating that it was faster to ditch their algorithm and to
apply a set based strategy:


The way to take your approach is to convert the row at a time processing
(which is more traditional type of thinking) into a batch at a time (which
is more relational type of thinking). Now I'm not trying to insult you to
say that you suck or anything like that, we just need to dial you in to
think in relational terms.

The idea is to do batches (or bundles) of rows rather than processing a
single one at a time.

So let's take your example (since you didn't give exact values [probably
out of kindness to save my eyeballs] I'll use your generic example to
extend what I'm talking about):

Before:
if exists (select record) then
update record
else
insert record

New way:
1. Load all your rows into a table named new_stuff in a separate work
   database (call it work_db) and load it using bcp -- no 3GL needed:
   1. truncate new_stuff and drop all indexes
   2. sort your data using UNIX sort(1) on the clustered index columns
   3. load it using bcp
   4. create the clustered index using with sorted_data, plus any ancillary
      non-clustered indexes

2. Assume that your target table is called old_stuff.
3. Do the update in a single batch:
begin tran

/* delete any rows in old_stuff which would normally
** would have been updated... we'll insert 'em instead!
** Essentially, treat the update as a delete/insert.
*/

delete old_stuff
from old_stuff,
new_stuff
where old_stuff.key = new_stuff.key

/* insert entire new table: this adds any rows
** that would have been updated before and
** inserts the new rows
*/
insert old_stuff
select * from new_stuff

commit tran



You can do all this without writing 3-GL, using bcp and a shell script.

A word of caution:

Since these inserts/updates are batch oriented you may blow your log if you
attempt to do too many at a time. In order to avoid this, use the set rowcount
directive to create bite-size chunks.
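For instance, here is a minimal sketch of the chunking idea, reusing the
old_stuff/new_stuff tables from the example above and an arbitrary batch size
of 5000 rows (adjust to suit your log). It chunks the delete phase only; the
insert phase would need similar treatment, or a transaction log dump in
between:

set rowcount 5000         -- process at most 5000 rows per pass
declare @rc int
select @rc = 1
while @rc > 0
begin
    begin tran
    delete old_stuff
    from old_stuff, new_stuff
    where old_stuff.key = new_stuff.key
    select @rc = @@rowcount   -- capture the count before anything resets it
    commit tran
    /* optionally dump the transaction log here between passes */
end
set rowcount 0            -- restore the default
go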

Back to top
-------------------------------------------------------------------------------

6.1.2: When should I execute an sp_recompile?

-------------------------------------------------------------------------------

An sp_recompile should be issued any time a new index is added or update
statistics is run. Dropping an index will cause an automatic recompile of all
objects that are dependent on the table.

The sp_recompile command simply increments the schemacnt counter for the given
table. Each dependent object's counter is checked against this counter and, if
they differ, the SQL Server recompiles the object.
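For example (mytable is just a placeholder table name here), a typical
sequence after refreshing statistics might be:

update statistics mytable
go
exec sp_recompile mytable
go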

Back to top
-------------------------------------------------------------------------------

6.1.3: What are the different types of locks?

-------------------------------------------------------------------------------

First off, just to get it out of the way, there is no method to perform row
level locking (prior to ASE 11.9 -- see Q6.1.11). If you think you need row
level locking, you probably aren't thinking set based -- see Q6.1.1 for set
processing.

The SQL Server uses locking in order to ensure the sanity of your queries.
Without locking there is no way to ensure the integrity of your operation.
Imagine a transaction that debited one account and credited another. If the
transaction didn't lock out readers/writers then someone can potentially see
erroneous data.

Essentially, the SQL Server attempts to use the least intrusive lock possible,
page lock, to satisfy a request. If it reaches around 200 page locks, then it
escalates the lock to a table lock and releases all page locks thus performing
the task more efficiently.

There are three types of locks:

*page locks
*table locks
*demand locks

Page Locks

There are three types of page locks:

*shared
*exclusive
*update

shared

These locks are requested and used by readers of information. More than one
connection can hold a shared lock on a data page.

This allows for multiple readers.

exclusive

The SQL Server uses exclusive locks when data is to be modified. Only one
connection may have an exclusive lock on a given data page. If a table is large
enough and the data is spread sufficiently, more than one connection may update
different data pages of a given table simultaneously.

update

An update lock is placed during a delete or an update while the SQL Server is
hunting for the pages to be altered. While an update lock is in place, there
can be shared locks thus allowing for higher throughput.

The update lock(s) are promoted to exclusive locks once the SQL Server is ready
to perform the delete/update.

Table Locks

There are three types of table locks:

*intent
*shared
*exclusive

intent

Intent locks indicate the intention to acquire a shared or exclusive lock on a
data page. They prevent other transactions from acquiring a conflicting
table-level lock (for example, an exclusive table lock) while page-level locks
are held.

shared

This is similar to a page level shared lock but it affects the entire table.
This lock is typically applied during the creation of a non-clustered index.

exclusive

This is similar to a page level exclusive lock but it affects the entire table.
If an update or delete affects the entire table, an exclusive table lock is
generated. Also, during the creation of a clustered index an exclusive lock is
generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets
a demand lock to indicate that a transaction is next in line to lock a table or
a page.

This avoids indefinite postponement if there was a flurry of readers when a
writer wished to make a change.
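To see which locks are currently being held, and by which spid, sp_lock can be
run from isql; the spid used below is only an example:

sp_lock
go
sp_lock 42        -- restrict the output to a single (example) spid
go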

Back to top
-------------------------------------------------------------------------------

6.1.4: What's the purpose of using holdlock?

-------------------------------------------------------------------------------

All select/readtext statements acquire shared locks (see Q6.1.3) to retrieve
their information. After the information is retrieved, the shared lock(s) is/
are released.

The holdlock option is used within transactions so that after the select/
readtext statement the locks are held until the end of the transaction:

*commit transaction
*rollback transaction

If the holdlock is not used within a transaction, the shared locks are
released.

Example

Assume we have the following two transactions and that each where-clause
qualifies a single row:

tx #1

begin transaction
/* acquire a shared lock and hold it until we commit */
1: select col_1 from table_a holdlock where id=1
2: update table_b set col_3 = 'fiz' where id=12
commit transaction

tx #2

begin transaction
1: update table_a set col_2 = 'a' where id=1
2: update table_c set col_3 = 'teo' where id=45
commit transaction

If tx#1, line 1 executes prior to tx#2, line 1, tx#2 waits to acquire its
exclusive lock until tx#1 releases the shared level lock on the object. This
will not be done until the commit transaction, thus slowing user throughput.

On the other hand, if tx#1 had not used the holdlock attribute, tx#2 would not
have had to wait until tx#1 committed its transaction. This is because shared
level locks are released immediately (even within transactions) when the
holdlock attribute is not used.

Note that the holdlock attribute does not stop another transaction from
acquiring a shared level lock on the object (i.e. another reader). It only
stops an exclusive level lock (i.e. a writer) from being acquired.

Back to top
-------------------------------------------------------------------------------

6.1.6: How do I find the oldest open transaction?

-------------------------------------------------------------------------------
select h.spid, u.name, p.cmd, h.name, h.starttime,
p.hostname, p.hostprocess, p.program_name
from master..syslogshold h,
master..sysprocesses p,
master..sysusers u
where h.spid = p.spid
and p.suid = u.suid
and h.spid != 0 /* not replication truncation point */

Back to top
-------------------------------------------------------------------------------

6.1.7: How do I check if log truncation is blocked?

-------------------------------------------------------------------------------

System 11 and beyond:
select h.spid, convert(varchar(20), h.name), h.starttime
from master..syslogshold h,
sysindexes i
where h.dbid = db_id()
and h.spid != 0
and i.id = 8 /* syslogs */
and h.page in (i.first, i.first+1) /* first page of log = page of oldest xact */

Back to top
-------------------------------------------------------------------------------

6.1.8: The timestamp datatype

-------------------------------------------------------------------------------

The timestamp datatype is a user-defined datatype supplied by Sybase, defined as:


varbinary(8) NULL

It has a special use when used to define a table column. A table may have at
most one column of type timestamp, and whenever a row containing a timestamp
column is inserted or updated the value in the timestamp column is
automatically updated. This much is covered in the documentation.

What isn't covered is what the values placed in timestamp columns actually
represent. It is a common misconception that timestamp values bear some
relation to calendar date and/or clock time. They don't - the datatype is
badly-named. SQL Server keeps a counter that is incremented for every write
operation - you can see its current value via the global variable @@DBTS
(though don't try and use this value to predict what will get inserted into a
timestamp column as every connection shares the same counter.)

The value is maintained between server startups and increases monotonically
over time (though again you should not rely on this behaviour). Eventually the
value will wrap, potentially causing huge problems, though you will be warned
before it does - see Sybase Technical News Volume 5, Number 1 (see Q10.3.1).
You cannot convert this value to a datetime value - it is simply an 8-byte
integer.


Note that the global timestamp value is used for recovery purposes in the
event of an RDBMS crash. As transactions are committed to the log each
transaction gets a unique timestamp value. The checkpoint process places a
marker in the log with its unique timestamp value. If the RDBMS crashes,
recovery is the process of looking for transactions that need to be rolled
forward and/or backward from the checkpoint event. If a transaction spans
the checkpoint event and never completed, it too needs to be rolled back.

Essentially, this describes the write-ahead log protocol described by C.J.
Date in An Introduction to Database Systems.

So what is it for? It was created in order to support the browse-mode functions
of DB-Library (and for recovery as mentioned above). This enables an
application to easily support optimistic locking (See Q1.5.4) by guaranteeing a
watch column in a row will change value if any other column in that row is
updated. The browse functions checked that the timestamp value was still the
same as when the column was read before attempting an update. This behaviour is
easy to replicate without necessarily using the actual client browse-mode
functions - just read the timestamp value along with other data retrieved to
the client, and compare the stored value with the current value prior to an
update.
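A minimal sketch of that approach, using made-up table and column names
(accounts, balance and ts):

/* assume: create table accounts (id int, balance money, ts timestamp) */
declare @old_ts varbinary(8), @old_balance money

select @old_ts = ts, @old_balance = balance
from accounts
where id = 1

/* ... client thinks about it for a while ... */

update accounts
set balance = @old_balance + 100
where id = 1
and ts = @old_ts          -- only succeeds if nobody else has touched the row

if @@rowcount = 0
    print "Row was changed by another user - re-read and retry"
go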

Back to top
-------------------------------------------------------------------------------

6.1.9: Stored Procedure Recompilation and Reresolution

-------------------------------------------------------------------------------

When a stored procedure is created, the text is placed in syscomments and a
parse tree is placed in sysprocedures. At this stage there is no compiled query
plan.

A compiled query plan for the procedure only ever exists in memory (that is, in
the procedure cache) and is created under the following conditions:

1. A procedure is executed for the first time.
2. A procedure is executed by a second or subsequent user when the first plan
in cache is still in use.
3. The procedure cache is flushed by server restart or cache LRU flush
procedure.
4. The procedure is executed or created using the with recompile option.

If the objects the procedure refers to change in some way - indexes dropped,
table definition changed, etc - the procedure will be reresolved - which
updates sysprocedures with a modified tree. Before 10.x the tree grows and in
extreme cases the procedure can become too big to execute. This problem
disappears in Sybase System 11. This reresolution will always occur if the
stored procedure uses temporary tables (tables that start with "#").

There is apparently no way of telling if a procedure has been reresolved.

Traceflag 299 offers some relief, see Q1.3.3 for more information regarding
traceflags.

The Official Explanation -- Reresolution and Recompilation Explained

When stored procedures are created, an entry is made in sysprocedures that
contains the query tree for that procedure. This query tree is the resolution
of the procedure and the applicable objects referenced by it. The syscomments
table will contain the actual procedure text. No query plan is kept on disk.
Upon first execution, the query tree is used to create (compile) a query plan
(execution plan) which is stored in the procedure cache, a server memory
structure. Additional query plans will be created in cache upon subsequent
executions of the procedure whenever all existing cached plans are in use. If a
cached plan is available, it will be used.

Recompilation is the process of using the existing query tree from
sysprocedures to create (compile) a new plan in cache. Recompilation can be
triggered by any one of the following:

*First execution of a stored procedure,
*Subsequent executions of the procedure when all existing cached query
plans are in use,
*If the procedure is created with the recompile option, CREATE PROCEDURE
sproc WITH RECOMPILE
*If execution is performed with the recompile option, EXECUTE sproc WITH
RECOMPILE

Re-resolution is the process of updating the query tree in sysprocedures AND
recompiling the query plan in cache. Re-resolution only updates the query tree
by adding the new tree onto the existing sysprocedures entry. This process
causes the procedure to grow in size which will eventually cause an execution
error (Msg 703 - Memory request failed because more than 64 pages are required
to run the query in its present form. The query should be broken up into
shorter queries if possible). Execution of a procedure that has been flagged
for re-resolution will cause the re-resolution to occur. To reduce the size of
a procedure, it must be dropped which will remove the entries from
sysprocedures and syscomments. Then recreate the procedure.

Re-resolution can be triggered by various activities most of which are
controlled by SQL Server, not the procedure owner. One option is available for
the procedure owner to force re-resolution. The system procedure, sp_recompile,
updates the schema count in sysobjects for the table referenced. A DBA usually
will execute this procedure after creating new distribution pages by use of
update statistics. The next execution of procedures that reference the table
flagged by sp_recompile will have a new query tree and query plan created.
Automatic re-resolution is done by SQL Server in the following scenarios:

*Following a LOAD DATABASE on the database containing the procedure,
*After a table used by the procedure is dropped and recreated,
*Following a LOAD DATABASE of a database where a referenced table resides,
*After a database containing a referenced table is dropped and recreated,
*Whenever a rule or default is bound or unbound to a referenced table.

Forcing automatic compression of procedures in System 10 is done with trace
flag 241. System 11 should be doing automatic compression, though this is not
certain.

When are stored procedures compiled?

Stored procedures are in a database as rows in sysprocedures, in the form of
parse trees. They are later compiled into execution plans.

A stored procedure is compiled:

1. with the first EXECute, when the parse tree is read into cache
2. with every EXECute, if CREATE PROCEDURE included WITH RECOMPILE
3. with each EXECute specifying WITH RECOMPILE
4. if the plans in cache for the procedure are all in use by other processes
5. after a LOAD DATABASE, when all procedures in the database are recompiled
6. if a table referenced by the procedure can not be opened (using object id),
when recompilation is done using the table's name
7. after a schema change in any referenced table, including:
1. CREATE INDEX or DROP INDEX to add/delete an index
2. ALTER TABLE to add a new column
3. sp_bindefault or sp_unbindefault to add/delete a default
4. sp_bindrule or sp_unbindrule to add/delete a rule

8. after EXECute sp_recompile on a referenced table, which increments
sysobjects.schema and thus forces re-compilation

What causes re-resolution of a stored procedure?

When a stored procedure references an object that is modified after the
creation of the stored procedure, the stored procedure must be re-resolved.
Re-resolution is the process of verifying the location of referenced objects,
including the object id number. Re-resolution will occur under the following
circumstances:

1. One of the tables used by the stored procedure is dropped and re-created.
2. A rule or default is bound to one of the tables (or unbound).
3. The user runs sp_recompile on one of the tables.
4. The database the stored procedure belongs to is re-loaded.
5. The database that one of the stored procedure's tables is located in is
re-loaded.
6. The database that one of the stored procedure's tables is located in is
dropped and re-created.

What will cause the size of a stored procedure to grow?

Any of the following will cause a stored procedure to grow when it is
recompiled:

1. One of the tables used in the procedure is dropped and re-created.
2. A new rule or default is bound to one of the tables or the user runs
sp_recompile on one of the tables.
3. The database containing the stored procedure is re-loaded.

Other things causing a stored procedure to be re-compiled will not cause it to
grow. For example, dropping an index on one of the tables used in the procedure
or doing EXEC WITH RECOMPILE.

The difference is between simple recompilation and re-resolution. Re-resolution
happens when one of the tables changes in such a way that the query trees
stored in sysprocedures may be invalid. The datatypes, column offsets, object
ids or other parts of the tree may change. In this case, the server must
re-allocate some of the query tree nodes. The old nodes are not de-allocated
(there is no way to do this within a single procedure header), so the procedure
grows. In time, trying to execute the stored procedure will result in a 703
error about exceeding the 64 page limit for a query.
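One rough, unofficial way to watch this growth (an assumption on my part, not
a documented technique) is to count the rows, i.e. tree chunks, that the
procedure occupies in sysprocedures and see whether the count keeps climbing:

/* my_proc is a placeholder procedure name */
select count(*)
from sysprocedures
where id = object_id("my_proc")
go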

Back to top
-------------------------------------------------------------------------------

6.1.10: How do I manipulate varbinary columns?

-------------------------------------------------------------------------------

The question was posed - How do we manipulate varbinary columns, given that
some portion - like the 5th and 6th bit of the 3rd byte - of a (var)binary
column, needs to be updated? Here is one approach, provided by Bret Halford (
br...@sybase.com), using stored procedures to set or clear certain bits of a
certain byte of a field of a row with a given id:
drop table demo_table
drop procedure clear_bits
drop procedure set_bits
go
create table demo_table (id numeric(18,0) identity, binary_col
binary(20))
go
insert demo_table values (0xffffffffffffffffffffffffffffffffffffffff)
insert demo_table values (0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa)
insert demo_table values (0x0000000000000000000000000000000000000000)
go

create procedure clear_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint, -- specifies which byte of binary_col to change
@mask binary(1) -- bits to be cleared are zeroed,
-- bits left alone are turned on
-- so 0x00 = clear all, 0xfb = clear bit 3
)
as
update demo_table set binary_col =
substring(binary_col,1,@bytenum-1)+
convert(binary(1),
convert(tinyint,substring(binary_col,@bytenum,1)) &
convert(tinyint,@mask)
)+
substring(binary_col,@bytenum+1,20)
from demo_table
where id = @id
go

create procedure set_bits (
@id numeric(18,0), -- primary key of row to be changed
@bytenum tinyint, -- specifies which byte of binary_col to change
@mask binary(1) -- bits to be set are turned on
-- bits left alone are zeroed
-- so 0xff = set all, 0xfb = set all but 3
)
as
update demo_table set binary_col =
substring(binary_col,1,@bytenum-1)+
convert(binary(1),
convert(tinyint,substring(binary_col,@bytenum, 1)) |
convert(tinyint,@mask)
)+
substring(binary_col,@bytenum+1,20)
from demo_table
where id = @id
go

select * from demo_table
-- clear bits 2,4,6,8 of byte 1 of row 1
exec clear_bits 1,1,0xAA

-- set bits 1-8 of byte 20 of row 3
exec set_bits 3,20,0xff

-- clear bits 1-8 of byte 4 of row 2
exec clear_bits 2,4,0x00

-- clear bit 3 of byte 5 of row 2
exec clear_bits 2,5,0xfb
exec clear_bits 2,6,0x0f
exec set_bits 2,10,0xff
go

select * from demo_table
go

Back to top
-------------------------------------------------------------------------------

6.1.11: Does Sybase support Row Level Locking?

-------------------------------------------------------------------------------

With Adaptive Server Enterprise 11.9 Sybase introduced row level locking into
its product. In fact it went further than that, it introduced 3 different
locking levels:

*All Pages Locking


This is the scheme that is implemented in all servers prior to 11.9. Here
locks are taken out at the page level, which may include many rows. The
name refers to the fact that all of the pages in any data manipulation
statement are locked, both data and index.

*Data Page Locking


The other two locking schemes are bundled together under the title Data
Page Locking, referring to the fact that only data pages are ever locked in
the conventional sense. Data Page Locking is divided into two categories
+Data Only Locking

This locking scheme still locks a page at a time, including all of the
rows contained within that page, but uses a new mechanism, called
latches, to lock index pages for the shortest amount of time. One of
the consequences of this scheme is that it does not update index pages.
In order to support this Sybase has introduced a new concept,
forwarded rows. These are rows that have had to move because they have
grown beyond the space allowed for them on the page on which they were
created (2002 bytes per page).
+Row Level Locking

Just as it sounds, the lock manager only locks the row involved in the
operation.

Back to top
-------------------------------------------------------------------------------

6.1.12: Why do my page locks not get escalated to a table lock after 200 locks?

-------------------------------------------------------------------------------

There are several reasons why this may be happening.

*Are you doing the updates from within a cursor?


The lock promotion only happens if you are attempting to take out 200 locks
in a single operation, i.e. a single insert, update or delete. If you
continually loop over a table using a cursor, locking one row at a time, the
lock promotion never fires. Either use an explicit mechanism to lock the
whole table, if that is required, or remove the cursor, replacing it with an
appropriate join (see the sketch at the end of this answer).

*A single operation is failing to escalate?


Even if you are performing a single insert, update or delete, Sybase only
attempts to lock the whole table when the lock escalation point is reached.
If this attempt fails because there is another lock which prevents the
escalation, the attempt is aborted and individual page locking continues.
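For example, rather than looping over the table with a cursor, a single
set-based statement such as the sketch below (all table and column names are
illustrative) gives the server the chance to escalate to a table lock once the
threshold is reached:

update old_stuff
set    status = 'processed'
from   old_stuff, new_stuff
where  old_stuff.key = new_stuff.key
go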


Platform Specific Issues




2.1 How to Start ASE on Remote NT Servers

next prev ASE FAQ
-------------------------------------------------------------------------------

2.1 How to Start ASE on Remote NT Servers

-------------------------------------------------------------------------------

Currently, there is no method of starting ASE on a remote NT server using
Sybase Central. So how do you get ASE running on an NT server located in one
city when you are currently located in another? OK, OK, so flying there is an
option, but let's try to stay within the realms of practicality <g>.

One option is to buy a good telnet server and telnet onto the box and then
start it using the "RUN_<server>.BAT" file. This works, but depending on the
telnet server can be a little troublesome. NT does not have such a nice set of
commands as Unix, so there is no "startserver" to run the server in the
background. This means that the telnet window that you use to start the server
may have to stay open for the lifetime of the server. This means that the
health of ASE is now dependent upon two machines not crashing. As I say, your
mileage may vary, but I have certainly found this to be the case with at least
one telnet server.

Another option is to use SRVMGR.EXE from the Windows NT resource kit. Roughly
you issue

srvmgr \\SERVER-TO-BE-MANAGED

(obviously replacing SERVER-TO-BE-MANAGED with the name of the server you wish
to start ASE on!)

Select the "Services" option, and start ASE as if you were in the "Services"
applet on a local NT server.

Yet another option is to install PC Anywhere or VNC on both machines and use
one of these tools to remotely control the system. (VNC is a very good version
of PC Anywhere, except that the clients and servers run on NT, Unix, Linux; the
source code is available and it is free (in both senses of the word)!)

If anyone knows of any better methods, please let me know and I will add them
to this section. Thanks.
-------------------------------------------------------------------------------

next prev ASE FAQ

DBCC's





3.1 How do I set TS Role in order to run certain DBCCs...?
3.2 What are some of the hidden/trick DBCC commands?
3.3 The unauthorized DBCC list with doco - see Q11.4.1
3.4 Fixing a Munged Log
3.5 Another site with DBCC commands - see Q11.4.2


Performing any of the above may corrupt your SQL Server. Please do not call
Sybase Technical Support after screwing up your ASE/SQL Server. Remember,
always take a dump of the master database and any other databases that are
to be affected.

next prev ASE FAQ
-------------------------------------------------------------------------------

3.1: How to set TS Role

-------------------------------------------------------------------------------

Some DBCC commands require that you set TS Role in order to run them. Here's
how to set it:

Login to Server as sa and perform the following:



sp_role "grant", sybase_ts_role, sa

go
set role "sybase_ts_role" on
go

Back to top
-------------------------------------------------------------------------------

3.2: DBCC Command Reference

-------------------------------------------------------------------------------

If you know of any more DBCC Commands, please mail to syb...@midsomer.org. For
your consumption here they are, use at your own risk:

*allocdump( dbid, page )
*bhash( { print_bufs | no_print }, bucket_limit )
*buffer( [ dbid ][, objid ][, nbufs ], printopt = { 0 | 1 | 2 }, buftype )
*bytes( startaddress, length )
*checkalloc[( dbname [, fix | nofix ] ) ]
*checkcatalog[( dbname )]
*checkdb[( dbname [, skip_ncindex ] ) ]
*checktable( tablename | tabid [, skip_ncindex ] )
*corrupt( tablename, indid, error )
+1133 error demonstrates that a page we think is an oam is not
+2502 error shows multiple references to the same page
+2503 error shows a breakage in the page linkage
+2521 error shows that the page is referenced but is not allocated on
the extent page
+2523 error shows that the page number in the page or catalog entries
are out-of-range for the database
+2525 error shows that an extent objid/indid do not match what is on
the page
+2529 error shows a page number out-of-range for the database or a 605
style scenario
+2540 error occurs when a page is allocated on an extent but the page
is not referenced in the page chain
+2546 error occurs when an extent is found for an object without any of
its pages being referenced (a stranded extent)
+7939 error occurs when an allocation page has extents for an object
that are not reflected on the OAM page
+7940 error occurs when the total counts in the OAM page differ from
the actual count of pages in the chain
+7949 error is similar to a 7940 except that the counts are on an
allocation page basis

*cursorinfo(cursor_level, cursor_name) where
+cursor_level - level of nesting. -1 is all nesting levels

*dbinfo( [ dbname ] )
*dbrepair( dbid, option = { dropdb | fixindex | fixsysindex }, table,
indexid )
*dbrepair( dbid, ltmignore)
*dbtable( dbid )
*delete_row( dbid, pageid, delete_by_row = { 1 | 0 }, rownum )
*des( [ dbid ][, objid ] )
*engine(eng func) where eng func may be:
+"online"
+"offline"

*extentcheck( dbid, objid, indexid, sort = {1|0} )
*extentdump( dbid, page )
*extentzap( dbid, objid, indexid, sort )
*findnotfullextents( dbid, objid, indexid, sort = { 1 | 0 } )
*fix_al( [ dbname ] )
*help( dbcc_command )
*ind( dbid, objid, printopt = { 0 | 1 | 2 } )
*indexalloc(tablename|tabid, indid, [full | optimized | fast],[fix |
nofix])
*listoam(dbid, table_id, indid) - may supply dbname/tablename rather than
id
*locateindexpgs( dbid, objid, page, indexid, level )
*lock - print out lock chains
*log( [dbid][,objid][,page][,row][,nrecords][,type={-1..36}],printopt={0|
1} )
*memusage
*netmemshow( option = {1 | 2 | 3} )
*netmemusage
*newalloc( dbname, option = { 1 | 2 | 3 } )
*page( dbid, pagenum [, printopt={0|1|2} ][, cache={0|1} ][, logical={1|0}
] )
*pglinkage( dbid, start, number, printopt={0|1|2}, target, order={1|0} )
*pktmemshow( option = {spid} )
*procbuf( dbid, objid, nbufs, printopt = { 0 | 1 } )
*prtipage( dbid, objid, indexid, indexpage )
*pss( suid, spid, printopt = { 1 | 0 } )
*rebuildextents( dbid, objid, indexid )
*rebuild_log( dbid, 1, 1) - careful as this will cause large jumps in your
timestamp values used by log recovery.
*resource
*setkeepalive(# minutes) - for use on Novell with TCP/IP.
*settrunc('ltm','ignore') - this command may be useful for a dba who is
dumping and loading a database that has replication set on for the original
db.
*show_bucket( dbid, pageid, lookup_type )
*tab( dbid, objid, printopt = { 0 | 1 | 2 } )
*tablealloc(tablename|tabid, [full | optimized | fast],[fix | nofix])
*traceoff( tracenum [, tracenum ... ] )
*traceon( tracenum [, tracenum ... ] )
*tune( option, value ) - switch on any option immediately without having
to reboot the SQL Server. Switches correspond to the old (pre-System 11)
buildmaster -yall minus the c prefix

For example option may be:
+indextrips
+oamtrips
+datatrips
+schedspins
+bufwashsize
+sortbufsize
+sortpgcount
+maxscheds
+max_retries

*undo( dbid, pageno, rowno )
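Remember that most of these commands only send their output to your session if
trace flag 3604 is on. For example, to dump a page (the dbid and page number
below are just illustrative values):

dbcc traceon(3604)       -- send dbcc output to the session, not the errorlog
go
dbcc page(5, 1234, 1)    -- dbid, page number, printopt
go
dbcc traceoff(3604)
go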

Back to top
-------------------------------------------------------------------------------

3.4: Fixing a Munged Log

-------------------------------------------------------------------------------


Sybase Technical Support states that this is extremely dangerous as it
"jacks up the value of the timestamp" which is used for recovery purposes.
This may cause potential database corruption if the system fails while the
timestamp rolls over.

In 4.9.2, you could only run the dbcc rebuild_log command once and after
that you would have to use bcp to rebuild the database

In System 10, you can run this command about 10 times.

In System 11 I (Pablo, previous editor) tried it about 20 times and no
problem.

1> use master
2> go
1> select count(*) from your_database..syslogs
2> go

-----------
some number

1> sp_configure "allow updates",1
2> go
1> reconfigure with override /* for system 10 and below */
2> go

1> begin tran
2> go

/* Save the following status to be used later... */
1> select saved_status=status from sysdatabases where name = "your_database"
2> go
1> update sysdatabases set status = -32768 where name = "your_database"
2> go
1> commit tran
2> go
1> shutdown
2> go

1> dbcc rebuild_log (your_database, 0, 0)
2> go
DB-LIBRARY error (severity 9):
Unexpected EOF from SQL Server.

1> dbcc rebuild_log (your_database, 1, 1)
2> go
DBCC execution completed. If DBCC printed error messages, see your System
Administrator.


1> use your_database
2> go
1> select count(*) from syslogs
2> go

-----------
1

1> begin tran
2> go
1> /* replace saved_status below with the value noted from the earlier select */
1> update sysdatabases set status = saved_status where name = "your_database"
2> go
(1 row affected)
1> commit tran
2> go
1> shutdown
2> go

Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

isql



4.1 How do I hide my password using isql?
4.2 How do I remove row affected and/or dashes when using isql?
4.3 How do I pipe the output of one isql to another?

next prev ASE FAQ
-------------------------------------------------------------------------------

4.1: Hiding your password to isql

-------------------------------------------------------------------------------

Here are a menagerie (I've always wanted to use that word) of different methods
to hide your password. Pick and choose whichever fits your environment best:

Single SQL Server on host

Script #1

Assuming that you are using bourne shell sh(1) as your scripting language you
can put the password in a file and substitute the file where the password is
needed.
#!/bin/sh

# invoke say ISQL or something...
(cat $HOME/dba/password_file
cat << EOD
dbcc ...
go
EOD
) | $SYBASE/bin/isql -Usa -w1000

Script #2

#!/bin/sh
umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
mypassword
use mydb
go
sp_who
go
endOfCat

Script #3

#!/bin/sh
umask 077
cat <<-endOfCat | isql -Umyuserid -Smyserver
`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go
endOfCat

Script #4


#!/bin/sh
umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
mypassword
use mydb
go
sp_who
go
endOfIsql

Script #5


#!/bin/sh
umask 077
isql -Umyuserid -Smyserver <<-endOfIsql
`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go
endOfIsql

Script #6


#!/bin/sh
echo 'mypassword
use mydb
go
sp_who
go' | isql -Umyuserid -Smyserver

Script #7


#!/bin/sh
echo "`myScriptForGeneratingPasswords myServer`
use mydb
go
sp_who
go" | isql -Umyuserid -Smyserver

Script #8

#!/bin/sh
echo "Password :\c "
stty -echo
read PASSWD
stty echo

echo "$PASSWD
waitfor delay '0:1:00'
go
" | $SYBASE/bin/isql -Usa -S${DSQUERY}

Multiple SQL Servers on host

Again, assuming that you are using bourne shell as your scripting language, you
can do the following:

1. Create a global file. This file will contain passwords, generic functions,
master device for the respective DSQUERY.
2. In the actual scripts, source in the global file.

Global File

SYBASE=/usr/sybase

my_password()
{
case $1 in
SERVER_1) PASSWD="this";;
SERVER_2) PASSWD="is";;
SERVER_3) PASSWD="bogus";;
*) return 1;;
esac

return 0
}

Generic Script

#!/bin/sh -a

#
# Use "-a" for auto-export of variables
#

# "dot" the file - equivalent to csh() "source" command
. $HOME/dba/global_file

DSQUERY=$1

# Determine the password: sets PASSWD
my_password $DSQUERY
if [ $? -ne 0 ] ; then # error!
echo "<do some error catching>"
exit 1
fi

# invoke say ISQL or something...
echo "$PASSWD
dbcc ...
go" | $SYBASE/bin/isql -U sa -S $DSQUERY -w1000

Back to top
-------------------------------------------------------------------------------

4.2: How to remove row affected and dashes

-------------------------------------------------------------------------------

If you pipe the output of isql then you can use sed(1) to remove this
extraneous output:

echo "$PASSWD
sp_who
go" | isql -U sa -S MY_SERVER | sed -e '/affected/d' -e '/---/d'

If you simply wish to eliminate the rows affected line, use the set nocount on
switch.
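For example, adding it at the top of the batch that you send to isql:

set nocount on
go
sp_who
go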

Back to top
-------------------------------------------------------------------------------

4.3: How do I pipe the output of one isql to another?

-------------------------------------------------------------------------------

The following example queries sysdatabases and takes each database name and
creates a string of the sort sp_helpdb dbname and sends the results to another
isql. This is accomplished using bourne shell sh(1) and sed(1) to strip
unwanted output (see Q4.2):

#!/bin/sh

PASSWD=yuk
DSQUERY=GNARLY_HAIRBALL

echo "$PASSWD print \"$PASSWD\"
go
select 'sp_helpdb ' + name + char(10) + 'go'
from sysdatabases
go" | isql -U sa -S $DSQUERY -w 1000 | \
sed -e '/affected/d' -e '/---/d' -e '/Password:/d' | \
isql -U sa -S $DSQUERY -w 1000

To help you understand this you may wish to comment out any series of pipes and
see what output is being generated.

Back to top
-------------------------------------------------------------------------------

next prev ASE FAQ

bcp



5.1 How do I bcp null dates?
5.2 Can I use a named pipe to bcp/dump data out or in?
5.3 How do I exclude a column?

next prev ASE FAQ
-------------------------------------------------------------------------------

5.1: How do I bcp null dates?

-------------------------------------------------------------------------------

As long as there is nothing between the field delimiters in your data, a null
will be entered. If there's a space, the value will be Jan 1, 1900.

You can use sed(1) to squeeze blanks out of fields:
sed -e 's/|[ ]*|/||/g' old_file > new_file

Back to top
-------------------------------------------------------------------------------

5.2: Can I use a named pipe to bcp/dump data out or in?

-------------------------------------------------------------------------------

System 10 and above.

If you would like to bcp copy from one table to a named pipe and compress:

1. %mknod bcp.pipe p
2. %compress < bcp.pipe > sysobjects.Z &
3. %bcp master..sysobjects out bcp.pipe -c -U ..
4. Use ps(1) to determine when the compress finishes.

To bcp from my1db..dummy_table_1 to my2db..dummy_table_2:

1. %mknod bcp.pipe p
2. %bcp my2db..dummy_table_2 in bcp.pipe -c -U .. &

To avoid confusion between the above bcp and the next, you may choose
to either use a separate window or redirect the output to a file.

3. %bcp my1db..dummy_table_1 out bcp.pipe -c -U ..

Back to top
-------------------------------------------------------------------------------

5.3: How do I exclude a column?

-------------------------------------------------------------------------------

Open/Client 11.1.1

Create a view based on the table that you want to exclude a column from and
then bcp out from the view.
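For instance, a minimal sketch (mytable, its columns and the view name are all
placeholders) - create a view that selects every column except the one you want
to drop:

create view v_mytable_trimmed
as
select id, name, created_date    -- every column except the one to exclude
from mytable
go

Then bcp out from the view (bcp mydb..v_mytable_trimmed out mytable.dat -c ...)
exactly as you would from a table.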

Open/Client Versions Older Than 11.1.1

The documentation Utility programs for Unix describes the use of format files,
including the field Server Column Order. Server Column Order must equal the
colid of the column, or 0 if the host file field will not be loaded into any
table column.

I don't know if anyone has got this feature to work. So, here is another way of
removing the column. In your example, you want to remove the last column. I am
going to include another example to remove the second column and include a
fourth column. Why? Because it is harder. First example will deal with removing
the last column.

Removing the Last Column

Edit your bcpout.fmt file and look for the changes I made below. Using the
following bcpout.fmt file to dump the data:

--- bcpout.fmt
10.0
2 <------------------ Changed number of columns to BCP to two
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "\n" 2 text1 <--- Replaced <**> with \n
3 SYBCHAR 1 512 "\n" 3 text2 <--- DELETE THIS LINE

Now recreate the table with the last column removed and use the same bcpout.fmt
file to BCP back in the data.

Now let's try removing the second column out four columns on a table.

Removing the Second out of Four Columns

Edit the bcpout.fmt file and look for the changes I made below. Using the
following bcpout.fmt file to dump the data:

--- bcpout.fmt
10.0
3 <------------------ Changed number of columns to BCP to three
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "<**>" 2 text1 <--- DELETE THIS LINE
2 SYBCHAR 1 512 "<**>" 3 text2 <--- Changed number items to 2
3 SYBCHAR 1 512 "\n" 4 text3 <--- Changed number items to 3

Including the Fourth Column

Now copy the bcpout.fmt to bcpin.fmt, recreate table with col 2 removed, and
edit bcpin.fmt file:

--- bcpin.fmt
10.0
3
1 SYBINT4 0 4 "<**>" 1 counter
2 SYBCHAR 1 512 "<**>" 2 text2 <-- Changed column id to 2
3 SYBCHAR 1 512 "\n" 3 text3 <-- Changed column id to 3
-------------------------------------------------------------------------------

Back to top

next prev ASE FAQ


1.5.7: How much memory to configure?

-------------------------------------------------------------------------------

System 10 and below.

Overview

At some point you'll wonder if your SQL Server has been configured with
sufficient memory. We hope that it's not during some crisis but that's probably
when it'll happen.

The most important thing in setting up memory for a SQL Server is that it has
to be large enough to accommodate:

*concurrent user connections
*active procedures
*and concurrent open databases.

If the SQL Server is not set up correctly, its performance will suffer. A
delicate balance needs to be struck: your SQL Server must be large enough to
accommodate the users but not so large that it adversely affects the host
machine (such as by causing swapping).

Assumptions made of the reader:

*The reader has some experience administering SQL Servers.
*All queries have been tuned and there are no unnecessary table
scans.

Preface

As the SQL Server starts up, it pre-allocates its structures to support the
configuration. The memory that remains after the pre-allocation phase is the
available cache.

The available cache is partitioned into two pieces:

1. buffer cache - data pages to be sent to a user connection or flushed to
disk.
2. procedure cache - where query plans live.

The idea is to determine if the buffer cache and the procedure cache are of
adequate size. As a DBA you can use dbcc memusage to ascertain this.

The information provided by dbcc memusage is daunting at first but, taken in
sections, is easy to understand and provides the DBA with the vital information
necessary to determine if more memory is required and where it is required.

If the procedure cache is too small, user connections will get sporadic 701's:


There is insufficient system memory to run this query.

If the buffer cache is too small, response time may be poor or spiky.

The following text describes how to interpret the output of dbcc memusage and
to correlate this back to the fundamental question:


Does my SQL Server have enough memory?

Definitions

Before delving into the world of dbcc memusage some definitions to get us
through.

Buffer Cache (also referred to as the Data Cache)
Area of memory where SQL Server stores the most recently used data pages
and index pages in 2K page units. If SQL Server finds a data page or index
page in the buffer cache, it doesn't need to perform a physical I/O (it is
reported as a logical I/O). If a user connection selects data from a
database, the SQL Server loads the 2K data page(s) here and then hands the
information off to the user connection. If a user connection updates data,
these pages are altered, and then they are flushed out to disk by the SQL
Server.

This is a bit simplistic but it'll do. Read on for more info though.

The cache is maintained as a doubly linked list. The head of the list
is where the most recently used pages are placed. Naturally towards the
tail of the chain are the least recently used pages. If a page is
requested and it is found on the chain, it is moved back to the front
of the chain and the information is relayed, thus saving a physical I/
O.

But wait! This recycling is not done forever. When a checkpoint occurs
any dirty pages are flushed. Also, the parameter cbufwashsize
determines how many times a page containing data can be recycled before
it has to be flushed out to disk. For OAM and index pages the
corresponding parameters are coamtrips and cindextrips respectively.

Procedure Cache
Area of memory where SQL Server stores the most recently used query plans
of stored procedures and triggers. This procedure cache is also used by the
Server when a procedure is being created and when a query is being
compiled. Just like the buffer cache, if SQL Server finds a procedure or a
compilation already in this cache, it doesn't need to read it from the
disk.

The size of procedure cache is determined by the percentage of remaining
memory configured for this Server parameter after SQL Server memory needs
are met.

Available Cache

When the SQL Server starts up it pre-allocates its data structures to support
the current configuration. For example, based on the number of user connections,
additional netmem, open databases and so forth, the dataserver pre-allocates
how much memory it requires to support these configured items.

What remains after the pre-allocation is the available cache. The available
cache is divided into buffer cache and procedure cache. The sp_configure
"procedure cache" parameter determines the percentage breakdown. A value of 20
would read as follows:


20% of the available cache is dedicated to the procedure cache and 80% is
dedicated to the buffer cache.
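To check the current setting from isql, and to change it if you decide you
have the headroom (the new value only takes effect after the SQL Server is
restarted), something like:

1> sp_configure "procedure cache"
2> go
1> sp_configure "procedure cache", 20
2> go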

Your pal: dbcc memusage


dbcc memusage takes a snapshot of your SQL Server's current memory usage and
reports this vital information back to you. The information returned provides
information regarding the use of your procedure cache and how much of the
buffer cache you are currently using.

An important piece of information is the size of the largest query plan. We'll
talk about that more below.

It is best to run dbcc memusage after your SQL Server has reached a working
set. For example, at the end of the day or during lunch time.


Running dbcc memusage will freeze the dataserver while it does its work.
The more memory you have configured for the SQL Server the longer it'll
take. Our experience is that for a SQL Server with 300MB it'll take about
four minutes to execute. During this time, nothing else will execute: no
user queries, no sp_who's...

In order to run dbcc memusage you must have sa privileges. Here's a sample
execution for discussion purposes:
1> /* send the output to the screen instead of errorlog */
2> dbcc traceon(3604)
3> go
1> dbcc memusage
2> go
Memory Usage:

Meg. 2K Blks Bytes

Configured Memory:300.0000 153600 314572800

Code size: 2.6375 1351 2765600
Kernel Structures: 77.6262 39745 81396975
Server Structures: 54.4032 27855 57045920
Page Cache:129.5992 66355 135894640
Proc Buffers: 1.1571 593 1213340
Proc Headers: 25.0840 12843 26302464

Number of page buffers: 63856
Number of proc buffers: 15964

Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387

Procedure Cache, Top 20:

Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
Database Id: 6
Object Id: 1668357178
Object Name: lp_cm_subcase_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 10
Size of plans: 0.202827 Mb, 212680.000000 bytes, 110 pages
----
Database Id: 6
Object Id: 132351706
Object Name: csp_get_case
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 9
Size of plans: 0.149792 Mb, 157068.000000 bytes, 81 pages
----
Database Id: 6
Object Id: 1858261845
Object Name: lp_get_last_caller_new
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 2
Size of plans: 0.054710 Mb, 57368.000000 bytes, 30 pages
...

1> /* redirect output back to the errorlog */
2> dbcc traceoff(3604)
3> go

Dissecting memusage output

The output may appear overwhelming but it's actually pretty easy to parse.
Let's look at each section.

Memory Usage

This section provides a breakdown of the memory configured for the SQL Server.
Memory Usage:

Meg. 2K Blks Bytes

Configured Memory:300.0000 153600 314572800

Code size: 2.6375 1351 2765600
Kernel Structures: 77.6262 39745 81396975
Server Structures: 54.4032 27855 57045920
Page Cache:129.5992 66355 135894640
Proc Buffers: 1.1571 593 1213340
Proc Headers: 25.0840 12843 26302464

Number of page buffers: 63856
Number of proc buffers: 15964


The Configured Memory does not equal the sum of the individual components.
It does in the sybooks example but in practice it doesn't always. This is
not critical and it is simply being noted here.

The Kernel Structures and Server structures are of mild interest. They can be
used to cross-check that the pre-allocation is what you believe it to be. The
salient line items are Number of page buffers and Number of proc buffers.

The Number of proc buffers translates directly to the number of 2K pages
available for the procedure cache.

The Number of page buffers is the number of 2K pages available for the buffer
cache.

As a side note and not trying to muddle things, these last two pieces of
information can also be obtained from the errorlog:

... Number of buffers in buffer cache: 63856.
... Number of proc buffers allocated: 15964.

In our example, we have 15,964 2K pages (~32MB) for the procedure cache and
63,856 2K pages (~126MB) for the buffer cache.

Buffer Cache

The buffer cache contains the data pages that the SQL Server will be either
flushing to disk or transmitting to a user connection.

If this area is too small, the SQL Server must flush 2K pages sooner than might
be necessary to satisfy a user connection's request.

For example, in most database applications there are small edit tables that are
used frequently by the application. These tables will populate the buffer cache
and normally will remain resident during the entire life of the SQL Server.
This is good because a user connection may request validation and the SQL
Server will find the data page(s) resident in memory. If however there is
insufficient memory configured, then these small tables will be flushed out of
the buffer cache in order to satisfy another query. The next time a validation
is requested, the tables will have to be re-read from disk in order to satisfy
the request. Your performance will degrade.

Memory access is easily an order of magnitude faster than performing a physical
I/O.

In this example we know from the previous section that we have 63,856 2K pages
(or buffers) available in the buffer cache. The question to answer is, "do we
have sufficient buffer cache configured?"

The following is the output of the dbcc memusage regarding the buffer cache:
Buffer Cache, Top 20:

DB Id Object Id Index Id 2K Buffers

6 927446498 0 9424
6 507969006 0 7799
6 959446612 0 7563
6 116351649 0 7428
6 2135014687 5 2972
6 607445358 0 2780
6 507969006 2 2334
6 2135014687 0 2047
6 506589013 0 1766
6 1022066847 0 1160
6 116351649 255 987
6 927446498 8 897
6 927446498 10 733
6 959446612 7 722
6 506589013 1 687
6 971918604 0 686
6 116351649 6 387
Index Legend
+-------+---------------------+
| Value | Definition          |
+-------+---------------------+
| 0     | Table data          |
+-------+---------------------+
| 1     | Clustered index     |
+-------+---------------------+
| 2-250 | Nonclustered        |
|       | indexes             |
+-------+---------------------+
| 255   | Text pages          |
+-------+---------------------+

*To translate the DB Id use select db_name(#) to map back to the database
name.
*To translate the Object Id, use the respective database and use the
select object_name(#) command.
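For example, to decode the largest entry in the output above (the database
name returned will be site specific; mydb is just a placeholder):

select db_name(6)                -- map DB Id 6 back to a database name
go
use mydb                         -- placeholder: the name returned above
go
select object_name(927446498)    -- map the Object Id within that database
go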

It's obvious that the first 10 items take up the largest portion of the buffer
cache. Sum these values and compare the result to the amount of buffer cache
configured.

Summing the 10 items nets a result of 45,263 2K data pages. Comparing that to
the number of pages configured, 63,856, we see that this SQL Server has
sufficient memory configured.

When do I need more Buffer Cache?

I follow the following rules of thumb to determine when I need more buffer
cache:

*If the sum of all the entries reported is equal to the number of pages
configured and all entries are relatively the same size, crank it up.
*Note the natural groupings that occur in the example. If the difference
between any of the groups is greater than an order of magnitude I'd be
suspicious, but only if the sum of the larger groups is very close to the
number of pages configured.

Procedure Cache

If the procedure cache is not of sufficient size you may get sporadic 701
errors:


There is insufficient system memory to run this query.

In order to calculate the correct procedure cache one needs to apply the
following formula (found in SQL Server Troubleshooting Guide - Chapter 2,
Procedure Cache Sizing):


proc cache size = max(# of concurrent users) * (size of the largest plan) *
1.25

The flaw with the above formula is that if only a small fraction of the users
(say 10%) ever execute the largest plan, then you will overshoot. If you have
distinct classes of connections whose largest plans are mutually exclusive,
then you need to account for that:

ttl proc cache = proc cache size * x% + proc cache size * y% ...
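
For example (purely hypothetical numbers): suppose 500 connections are active
at peak, 200 of them (40%) running plans of up to 30K and the other 300 (60%)
running plans of no more than 10K. Then

ttl proc cache = (500 * 30K * 1.25) * 40% + (500 * 10K * 1.25) * 60%
ttl proc cache = 7,500 + 3,750 = 11,250

rather than the 500 * 30K * 1.25 = 18,750 that the single worst-case formula
would give.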

The max(# of concurrent users) is not the number of user connections configured
but rather the actual number of connections during the peak period.

To compute the size of the largest [query] plan, take the results from the
Procedure Cache section of the dbcc memusage output and apply the following
formula:


query plan size = [size of plans in bytes] / [number of plans]

We can compute the size of the query plan for lp_cm_case_list by using the
output of the dbcc memusage:
...
Database Id: 6
Object Id: 1652357121
Object Name: lp_cm_case_list
Version: 1
Uid: 1
Type: stored procedure
Number of trees: 0
Size of trees: 0.000000 Mb, 0.000000 bytes, 0 pages
Number of plans: 16
Size of plans: 0.323364 Mb, 339072.000000 bytes, 176 pages
----
...

Entering the respective numbers, the query plan size for lp_cm_case_list is
21K:


query plan size = 339072 / 16
query plan size = 21192 bytes or 21K

The formula would be applied to all objects found in the procedure cache and
the largest value would be plugged into the procedure cache size formula:
Query Plan Sizes

+------------------------+------------+
| Object                 | Query Plan |
|                        | Size       |
+------------------------+------------+
| lp_cm_case_list        | 21K        |
+------------------------+------------+
| lp_cm_subcase_list     | 21K        |
+------------------------+------------+
| csp_get_case           | 19K        |
+------------------------+------------+
| lp_get_last_caller_new | 28K        |
+------------------------+------------+

The size of the largest [query] plan is 28K.

Entering these values into the formula:


proc cache size = max(# of concurrent users) * (size of the largest plan) *
1.25
proc cache size = 491 connections * 28K * 1.25
proc cache size = 17,185 2K pages required

Our example SQL Server has 15,964 2K pages configured but 17,185 2K pages are
required. This SQL Server can benefit by having more procedure cache
configured.

This can be done one of two ways:

1. If you have some headroom in your buffer cache, then use sp_configure
"procedure cache" to increase the ratio of procedure cache to buffer cache:

procedure cache =
[ proposed procedure cache ] /
( [ current procedure cache ] + [ current buffer cache ] )

The new procedure cache would be 22%:

procedure cache = 17,185 / ( 15,964 + 63,856 )
procedure cache = .2152 or 22%

2. If the buffer cache cannot be shrunk, then use sp_configure "memory" to
increase the total memory:

mem size =
([ proposed procedure cache ]) /
([ current procedure cache ] / [ current configured memory ])

The new memory size would be 165,399 2K pages, assuming that the
procedure cache is unchanged:

mem size = 17,185 / ( 15,964 / 153,600 )
mem size = 165,399 2K pages
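
Using the numbers from this example, the corresponding commands would look
something like the following (the exact parameter names vary slightly between
releases, and both are static options, so the server has to be rebooted before
they take effect):

1> sp_configure "procedure cache", 22
2> go

or

1> sp_configure "memory", 165399
2> go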

Back to top
-------------------------------------------------------------------------------

1.5.8: Why should I use stored procedures?

-------------------------------------------------------------------------------

There are many advantages to using stored procedures (unfortunately they do not
handle the text/image types):

*Security - you can revoke access to the base tables and only allow users
to access and manipulate the data via the stored procedures (see the sketch
after this list).
*Performance - stored procedures are parsed and a query plan is compiled.
This information is stored in the system tables and it only has to be done
once.
*Network - if you have users who are on a WAN (slow connection), having
stored procedures will improve throughput because fewer bytes need to flow
down the wire from the client to the SQL Server.
*Tuning - if you have all your SQL code housed in the database, then it's
easy to tune the stored procedure without affecting the clients (unless of
course the parameters change).
*Modularity - during application development, the application designer can
concentrate on the front-end and the DB designer can concentrate on the SQL
Server.
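
As a small illustration of the security point above (the table, column and
procedure names here are invented for the example):

1> create procedure get_author_name @author_id int as
2>     select name from authors where author_id = @author_id
3> go
1> revoke select on authors from public
2> go
1> grant execute on get_author_name to public
2> go

Users can now retrieve the data the procedure exposes even though they can no
longer select from the base table directly (assuming the table and the
procedure have the same owner).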

Back to top
-------------------------------------------------------------------------------


Open Client



7.1 What is Open Client?
7.2 What is the difference between DB-lib and CT-lib?
7.3 What is this TDS protocol?
7.4 I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.
7.5 The Basics of Connecting to Sybase
7.6 Connecting to Sybase using ODBC

next prev ASE FAQ
-------------------------------------------------------------------------------

7.1: What is Open Client?

-------------------------------------------------------------------------------

Open Client is the interface (API) between client systems and Sybase servers.
Fundamentally, it comes in two forms:

Runtime

The runtime version is a set of dynamic libraries (dlls on W32 platforms) that
allow client applications to connect to Sybase and Microsoft servers, or, in
fact, any server that implements the Tabular Data Streams (TDS) protocol. You
need some form of Open Client in order to be able to connect to ASE in any way,
shape or form. Even if you are running isql on exactly the same machine as
ASE itself, communication will still be via Open Client. That is not to say
that client to server communication on the same machine will go via the
physical network; that decision is left entirely to the protocol implementation
on the machine in question.

Development

The development version contains all of the libraries from the runtime
version, plus the header files, library files and other files that enable
developers to build client applications that are able to connect to Sybase
servers.

Back to top
-------------------------------------------------------------------------------

7.2: What is the difference between DB-lib and CT-lib?

-------------------------------------------------------------------------------

Both DB-lib and CT-lib are libraries that implement the TDS protocol from the
client side.

DB-lib

DB-lib was Sybase's first version. It was a good first attempt, but it has (or
had) a number of inconsistencies. There are, or possibly were, a lot of
applications written using DB-lib. If you are about to start a new Open Client
development, consider using CT-lib; it is the preferred choice. (Which versions
of TDS does DB-lib support? Is it only 4.2?)

Having said that you should use CT-lib for new developments, there is one case
where this may not be true, and that is two-phase commit. Two-phase commit is
supported directly by DB-lib but is not supported directly by CT-lib.

CT-lib

CT-lib is a completely re-written version of Open Client that was released in
the early '90s. The API is totally different from DB-lib's, and is much more
consistent. Applications written using DB-lib cannot simply be recompiled
against CT-lib; they need a significant amount of porting effort. CT-lib is
newer, more consistent and, in several people's opinions, including mine,
slightly longer winded. Having said that, the future of DB-lib is uncertain,
and it is certainly not being developed any more, so all new applications
should be written using CT-lib.

Back to top
-------------------------------------------------------------------------------

7.3: What is this TDS protocol?

-------------------------------------------------------------------------------

Tabular Data Streams, or TDS, is the name given to the protocol that is used
to connect Sybase clients with Sybase servers. A specification for the protocol
can be obtained from Sybase; I had a copy but cannot seem to find it now.

There is a project, FreeTDS, that is reverse engineering the protocol and
building a set of libraries independent of either Sybase or Microsoft, but able
to connect to either vendor's servers. FreeTDS is a considerable way down the
line, although I do not believe that it is production ready yet!

As part of the project, they have started to document the protocol, and a view
of TDS 5.0 can be seen on the FreeTDS web site.

Back to top
-------------------------------------------------------------------------------

7.4: I have upgraded to MS SQL Server 7.0 and can no longer connect from
Sybase's isql.

-------------------------------------------------------------------------------

Microsoft SQL Server has always supported the TDS protocol, and up to release
7 it was the primary means of communication between clients and servers. With
release 7, TDS has been reduced to being a "legacy" protocol. (I do not know
what the communication protocol/mechanism with release 7 is; you will need to
talk to someone from Microsoft or search comp.databases.ms-sqlserver .)

In order to connect to MS SQL Server 7 using Sybase's Open Client you will need
to install Service Pack 2 of SQL Server 7, available from
http://www.microsoft.com.

Back to top
-------------------------------------------------------------------------------

7.5: The Basics of Connecting to Sybase

-------------------------------------------------------------------------------

The following describes how to connect to Sybase ASE on a UNIX machine from a
Windows client with isql etc. The specific example is Sybase ASE 11.9 on
Redhat Linux 6.1, using Windows 95 and NT (I have both on separate partitions
and the process was the same). This is not a technical review or an in-depth
discussion (there are people far more qualified than me for that ;-) ). Rather
it is more along the lines of "This is how I managed it, it should work for
you". As always there are no guarantees, so if it goes wrong, it's your fault
[<g>].

The starting point for this discussion has to be that you have downloaded (or
whatever means you used to acquire it) both Sybase ASE for Linux and the PC
Client software (a big zip file) and are ready to install. I'm not going to
discuss the install process as Sybase managed to do a good job of that, so
I'm leaving well alone. The bit you have to take notice of is when you run
srvbuild. This should happen the first time you log on as the user sybase after
the install. If it doesn't, then you can run it by hand afterwards; it lives in
the $SYBASE directory under bin. The reason why I'm mentioning this is that
srvbuild defaults to installing your database using the name "localhost". Now
the problem with localhost is that it is kind of a special case and would mean
that you could not connect to your database from anywhere other than the server
itself. This would defeat the object of this discussion, so simply name it
something else, bob, george, albert, mydatabase, whatever, the choice is yours.

Having done this (it takes a while to complete) you should now have a running
database. So try to connect to it on the local machine with something like
"isql -SServerName -Usa" (where ServerName is whatever you called it when you
ran srvbuild); when it asks for a password, just press enter and you should be
greeted by the momentous welcome
1>

Not a lot for all the work you have done to get to this point, but you've
connected to your database and that's the main thing. This is very important as
not only does this mean that your database is working, but it also means that
the server half of Open Client is working. This is because even isql on the
server connects to the database using Open Client and you've just proved it
works, cool. Next run dsedit on the server and make a note of the following 3
things:


1: The server name
2: The IP address
3: The port

You're going to need these to get connected from Windows.
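
For reference, the information dsedit shows you comes from the
$SYBASE/interfaces file on the Linux box; a typical entry looks roughly like
this (the server name, address and port here are just example values, matching
the ones used below, and the master and query lines must be indented,
traditionally with a tab):

bob
        master tcp ether 192.0.0.2 2501
        query tcp ether 192.0.0.2 2501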

Now switch to your Windows machine. Did I remember to tell you to shut down
dsedit on the server? Consider it said ;-). Unpack the PC Client software zip
file and install it using the instructions that came with it. They worked fine
for me and I'm an idiot, so they should work for you. When you've finished, go
to the start menu and start dsedit (on my machine it's under programs ->
sybase). When it runs, it begins with a dialog asking you which Interface
driver to open. I've done this 3 times and went with the default every time, so
it should be a safe bet. At this point you can now add your Linux based server.
Select the menu item serverobject->add, then enter the name of the server you
just got from your Linux box in the field labeled "server". It is probably a
good idea that it is the same name you got from your Linux based dsedit, to
ensure that everyone is referring to the same server with the same name; it
prevents confusion. This then opens a new window with several fields, one of
which is the server name you just entered. The bottom field is the bit where
you enter the "nitty gritty", the server IP address and port. To do this, right
click on the field and select "modify attribute" to open the server address
dialog. When this new dialog opens, click add to open yet another dialog (is
there an award for the most gratuitous use of the word dialog???). OK, this is
the last one, honest. Leave the drop down list where it is (hopefully showing
TCP/IP or something similar). Instead move straight to the address field and
enter the following: the Linux server's IP address followed by the port number
(the one from the server's dsedit), separated by a comma. On my machine it
looks like this:
192.0.0.2,2501

Now you can "OK" your way back out of the dialogs, back up to where you started
from and exit dsedit. Then launch isql on the windows box and log in.
Personally I did this from a DOS prompt, using exactly the same syntax I did on
the Linux box, but that's just because I like it that way. Now you should be
happily querying you Linux (or other UNIX for that matter) based Sybase ASE
database. What you do with it now, is covered elsewhere in this FAQ from people
able to tell you, unlike me. Now just one more time for good measure, I'm going
to type the word, wait for it.... Dialog.

Back to top
-------------------------------------------------------------------------------

7.6: Connecting to Sybase Using ODBC

-------------------------------------------------------------------------------

To begin with you need to be certain that you can connect to your Linux hosted
Sybase ASE database from your Windows based machine. Do this by running isql
from your Windows box and connecting to the database; if this works, then
you're all set (see Q7.5). You will need the Sybase ODBC driver; this came with
the PC Client package. If you got your Windows Open Client software through
some other means, then you may need to download the ODBC driver, this will
become apparent later. Right, begin by launching the 32 bit ODBC administrator,
either from the Sybase menu under start -> programs or the control panel.
Ensure that you are displaying the "user DSN" section (by clicking on the
appropriate tab).

You can then click on the button labeled add to move to the driver selection
dialog. Select Sybase System 11 and click on finish. You will by now have
noticed that this is Microsoft's way of taunting you and you haven't actually
finished yet, you're actually at the next dialog. What you have actually done
is told windows that you are now about to configure your Sybase ODBC driver.
There are 4 boxes on the dialog with which you are now presented, and they are:


Data Source Name
Description
Server Name
Database Name

The data source name is the Server name from your interfaces file on your Linux
server. If you are uncertain of any of these values, then log onto your Linux
box, run dsedit and take a look. It will only take you 2 minutes and is much
easier than debugging it later. The description field is irrelevant and you can
put anything in there that is meaningful to you. Server name is the IP address
of the Linux server, that is hosting your database. Database name is the name
of a database to which you want to connect, once your Sybase connection has
been established. If in doubt, you can stick master in there for now, at least
you'll get a connection. Now you can click on OK to get back to the starting
screen, followed by another OK to exit ODBC administrator. We will now test the
connection by running Sybase Central. I chosen this because I figure that if
you downloaded the PC Client package, then I know you've got it (at least I'm
fairly sure). When you launch Sybase administrator from start->programs->
Sybase, you are presented with a connection dialog. There are 3 fields in this
box


User ID
Password
Server Name

In the field labeled UserID, you can type in sa. If you've been doing some work
on Sybase through other means and you have already created a valid user, then
you can use him (her, it, whatever). In the password field, type in the
appropriate password. Assuming you have changed nothing from the
original Sybase install and you are using sa, then you will leave this blank.
The final field is a dropdown list box containing all the Sybase remote
connections you have. Assuming you only have the one, then you can leave this
alone. If you have more than one, stick to the one that you know works for now
and that allows access to the user you've used. In simple English (and if you
don't speak English, then I hope somebody has translated it :-) ): if this is a
clean install and you have altered nothing after following the instructions
earlier to establish an Open Client connection, then the top box should contain
simply "sa", the middle box should be blank, and the bottom list-box should
contain whatever the servername is in your Linux based interfaces file.
Clicking on OK will now connect Sybase Central to the database and "away you
go"...

Hope this is of some assistance to you, but if you run into problems then I
suggest you post to the newsgroup, which is where the real experts hang out. I
am unlikely to be able to help you, as I have simply noted down my experiences
as I encountered them, in the hope they may help somebody out.
I take no responsibility for anything, including any result of following the
instructions in this text.
Good luck...

Jim

-------------------------------------------------------------------------------

6.2.7: Hierarchy traversal - BOMs

-------------------------------------------------------------------------------

Alright, so you wanna know more about representing hierarchies in a relational
database? Before I get into the nitty gritty I should at least give all of the
credit for this algorithm to "Hierarchical Structures: The Relational Taboo!
(Can Transitive Closure Queries be Efficient?)", by Michael J. Kamfonas, as
published in 1992 in the "Relational Journal" (I don't know which volume or
issue).

The basic algorithm goes like this, given a tree (hierarchy) that looks roughly
like this (forgive the ASCII art--I hope you are using a fixed font to view
this):
                a
              /   \
             /     \
            /       \
           b         c
          / \      / | \
         /   \    /  |  \
        d     e  f   g   h


Note, that the tree need not be balanced for this algorithm to work.

The next step assigns two numbers to each node in the tree, called left and
right numbers, such that the left and right numbers of each node fall between
the left and right numbers of all of that node's ancestors (I'll get into the
algorithm for assigning these left and right numbers later, but, hint: use a
depth-first search):
               1a16
              /    \
             /      \
            /        \
          2b7        8c15
          / \       /  |  \
         /   \     /   |   \
       3d4   5e6 9f10 11g12 13h14


Side Note: The careful observer will notice that these left and right
numbers look an awful lot like a B-Tree index.

So, you will notice that all of the children of node 'a' have left and right
numbers between 1 and 16, and likewise all of the children of 'c' have left and
right numbers between 8 and 15. In a slightly more relational format this table
would look like:
Table: hier

node   parent   left_nbr   right_nbr
-----  ------   --------   ---------
a      NULL         1          16
b      a            2           7
c      a            8          15
d      b            3           4
e      b            5           6
f      c            9          10
g      c           11          12
h      c           13          14

So, given a node name, say @node (in Sybase variable format), and you want to
know all of the children of the node you can do:
SELECT h2.node
FROM hier h1,
hier h2
WHERE h1.node = @node
AND h2.left_nbr > h1.left_nbr
AND h2.left_nbr < h1.right_nbr

If you had a table that contained, say, the salary for each node in your
hierarchy (assuming a node is actually a individual in a company) you could
then figure out the total salary for all of the people working underneath of
@node by doing:
SELECT sum(s.salary)
FROM hier h1,
hier h2,
salary s
WHERE h1.node = @node
AND h2.left_nbr > h1.left_nbr
AND h2.left_nbr < h1.right_nbr
AND s.node = h2.node

Pretty cool, eh? And, conversely, if you wanted to know how much it cost to
manage @node (i.e. the combined salary of all of the boss's of @node), you can
do:
SELECT sum(s.salary)
FROM hier h1,
hier h2,
salary s
WHERE h1.node = @node
AND h2.left_nbr < h1.left_nbr
AND h2.right_nbr > h1.right_nbr
AND s.node = h2.node

Now that you can see the algorithm in action everything looks peachy, however
the sticky point is the method in which left and right numbers get assigned.
And, unfortunately, there is no easy method to do this relationally (it can be
done, it just ain't that easy). For an real- world application that I have
worked on, we had an external program used to build and maintain the
hierarchies, and it was this program's responsibility to assign the left and
right numbers.

But, in brief, here is the algorithm to assign left and right numbers to every
node in a hierarchy. Note while reading this that this algorithm uses an array
as a stack, however since arrays are not available in Sybase, they are
(questionably) emulated using a temp table.
DECLARE @skip int,
@counter int,
@idx int,
@left_nbr int,
@node varchar(10)

/*-- Initialize variables --*/
SELECT @skip = 1000, /* Leave gaps in left & right numbers */
@counter = 0, /* Counter of next available left number */
@idx = 0 /* Index into array */

/*
* The following table is used to emulate an array for Sybase,
* for Oracle this wouldn't be a problem. :(
*/
CREATE TABLE #a (
idx int NOT NULL,
node varchar(10) NOT NULL,
left_nbr int NOT NULL
)

/*
* I know that I always preach about not using cursors, and there
* are ways to get around it, but in this case I am more worried
* about readability over performance.
*/
DECLARE root_cur CURSOR FOR
SELECT h.node
FROM hier h
WHERE h.parent IS NULL
FOR READ ONLY

/*
* Here we are populating our "stack" with all of the root
* nodes of the hierarchy. We are using the cursor in order
* to assign an increasing index into the "stack"...this could
* be done using an identity column and a little trickery.
*/
OPEN root_cur
FETCH root_cur INTO @node
WHILE (@@sqlstatus = 0)
BEGIN
SELECT @idx = @idx + 1
INSERT INTO #a VALUES (@idx, @node, 0)
FETCH root_cur INTO @node
END
CLOSE root_cur
DEALLOCATE CURSOR root_cur

/*
* The following cursor will be employed to retrieve all of
* the children of a given parent.
*/
DECLARE child_cur CURSOR FOR
SELECT h.node
FROM hier h
WHERE h.parent = @node
FOR READ ONLY

/*
* While our stack is not empty.
*/
WHILE (@idx > 0)
BEGIN
/*
* Look at the element on the top of the stack.
*/
SELECT @node = node,
@left_nbr = left_nbr
FROM #a
WHERE idx = @idx

/*
* If the element at the top of the stack has not been assigned
* a left number yet, then we assign it one and copy its children
* on the stack as "nodes to be looked at".
*/
IF (@left_nbr = 0)
BEGIN
/*
* Set the left number of the current node to be @counter + @skip.
* Note, we are doing a depth-first traversal, assigning left
* numbers as we go.
*/
SELECT @counter = @counter + @skip
UPDATE #a
SET left_nbr = @counter
WHERE idx = @idx

/*
* Append the children of the current node to the "stack".
*/
OPEN child_cur
FETCH child_cur INTO @node
WHILE (@@sqlstatus = 0)
BEGIN
SELECT @idx = @idx + 1
INSERT INTO #a VALUES (@idx, @node, 0)
FETCH child_cur INTO @node
END
CLOSE child_cur

END
ELSE
BEGIN
/*
* It turns out that the current node already has a left
* number assigned to it, so we just need to assign the
* right number and update the node in the actual
* hierarchy.
*/
SELECT @counter = @counter + @skip

UPDATE hier
SET left_nbr = @left_nbr,
right_nbr = @counter
WHERE hier.node = @node

/*
* "Pop" the current node off our "stack".
*/
DELETE #a WHERE idx = @idx
SELECT @idx = @idx - 1
END
END /* WHILE (@idx > 0) */
DEALLOCATE CURSOR child_cur

While reading through this, you should notice that assigning the left and right
numbers to the entire hierarchy is very costly, especially as the size of the
hierarchy grows. If you put the above code in an insert trigger on the hier
table, the overhead for inserting each node would be phenomenal. However, it is
possible to reduce the overall cost of an insertion into the hierarchy.

1. By leaving huge gaps in the left & right numbers (using the @skip variable),
you can reduce the circumstances in which the numbers need to be reassigned
for a given insert. Thus, as long as you can squeeze a new node between an
existing pair of left and right numbers you don't need to do the
re-assignment (which could affect all of the nodes in the hierarchy); a
sketch of this appears after this list.
2. By keeping an extra flag around in the hier table to indicate which nodes
are leaf nodes (this could be maintained with a trigger as well), you avoid
placing leaf nodes in the array and thus reduce the number of updates.
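
As a rough sketch of point 1 (using the hier table from above; @parent and
@newnode are assumed to have been declared and set to the parent's and the new
leaf's names already):

DECLARE @pleft int,
        @pright int,
        @last int

SELECT @pleft  = left_nbr,
       @pright = right_nbr
  FROM hier
 WHERE node = @parent

/* Right number of the parent's last existing child, or the parent's
 * own left number if it has no children yet.
 */
SELECT @last = isnull(max(right_nbr), @pleft)
  FROM hier
 WHERE left_nbr > @pleft
   AND right_nbr < @pright

IF (@pright - @last > 2)
BEGIN
    /* Still a gap: slot the new leaf in without renumbering anything. */
    INSERT hier (node, parent, left_nbr, right_nbr)
    VALUES (@newnode, @parent, @last + 1, @last + 2)
END
ELSE
BEGIN
    /* No gap left under this parent: fall back to the full left/right
     * renumbering pass shown above.
     */
    PRINT "no room left under this parent - renumber the hierarchy"
END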

Deletes on this table should never cause the left and right numbers to be
re-assigned (you could even have a trigger automagically re-parent orphaned
hierarchy nodes).

All-in-all, this algorithm is very effective as long as the structure of the
hierarchy does not change very often, and even then, as you can see, there are
ways of getting around a lot of its inefficiencies.

Back to top
-------------------------------------------------------------------------------

6.2.8: Calling OS commands from a trigger or a stored procedure

-------------------------------------------------------------------------------

11.5 and above

The Adaptive Server (11.5) will allow O/S calls from within stored procedures
and triggers. These stored procedures are known as extended stored procedures.

Pre-11.5

Periodically folks ask if it's possible to make a system command or call a UNIX
process from a Trigger or a Stored Procedure.

Guaranteed Message Processing

The typical ways people have implemented this capability is:

1. Buy Open Server and bind in your own custom stuff (calls to system() or
custom C code) and make Sybase RPC calls to it.
2. Have a dedicated client application running on the server box which
regularly scans a table and executes the commands written into it (and
tucks the results into another table which can have a trigger on it to
gather results...). It is somewhat tricky but cheaper than option 1; a
sketch of such a command table is given below.
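
A minimal sketch of the table such a scanning client might poll (all of the
names here are invented):

1> create table os_commands
2> (cmd_id numeric(9,0) identity,
3> command varchar(255) not null,
4> processed bit default 0,
5> results varchar(255) null)
6> go

The client application periodically selects the rows where processed = 0, runs
each command (via system() or similar), writes anything it wants to pass back
into results (or into a companion results table) and sets processed to 1.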

Sybase SQL Server 10.0.2.5 and Above - syb_sendmsg()

This release includes a new built-in function called syb_sendmsg(). Using this
function you can send a message up to 255 bytes in size to another application
from the SQL Server. The arguments that need to be passed to syb_sendmsg() are
the IP address and port number on the destination host, and the message to be
sent. The port number specified can be any UDP port, excluding ports 1-1024,
not already in use by another process. An example is:
1> select syb_sendmsg("120.10.20.5", 3456, "Hello")
2> go

This will send the message "Hello" to port 3456 at IP address '120.10.20.5'.
Because this built-in uses the UDP protocol to send the message, the SQL Server
does not guarantee the receipt of the message by the receiving application.


Also, please note that there are no security checks with this new function.
It is possible to send sensitive information with this command and Sybase
strongly recommends caution when utilizing syb_sendmsg to send sensitive
information across the network. By enabling this functionality, the user
accepts any security problems which result from its use (or abuse).

To enable this feature you should run the following commands as the System
Security Officer.

1. Login to the SQL Server using 'isql'.
2. Enable the syb_sendmsg() feature using sp_configure.
1> sp_configure "allow sendmsg", 1
2> go

1> sp_configure "syb_sendmsg port number", <port number>
2> go

1> reconfigure with override -- Not necessary with 11.0 and above
2> go

The server must be restarted to set the port number.

Using syb_sendmsg() with Existing Scripts

Since syb_sendmsg() installs the configuration parameter "allow sendmsg",
existing scripts that contain the syntax
1> sp_configure allow, 1
2> go

to enable updates to system tables should be altered to be fully qualified as
in the following:
1> sp_configure "allow updates", 1
2> go

If existing scripts are not altered they will fail with the following message:
1> sp_configure allow, 1
2> go
Configuration option is not unique.
duplicate_options
----------------------------
allow updates
allow sendmsg

(return status = 1)

(The above error is a little out of date for the latest releases of ASE, there
are now 8 rows that contain "allow", but the result is the same.)

Backing Out syb_sendmsg()

The syb_sendmsg() function requires the addition of two config values. If it
becomes necessary to roll back to a previous SQL Server version which does not
include syb_sendmsg(), please follow the instructions below.

1. Edit the RUNSERVER file to point to the SWR SQL Server binary you wish to
use.
2. isql -Usa -P<sa password> -Sserver_name -n -iunconfig.sendmsg -ooutput_file

Sample C program

#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char *argv[])
{
    struct sockaddr_in sadr;
    int portnum, sck, msglen;
    socklen_t dummy;
    char msg[256];

    if (argc < 2) {
        printf("Usage: udpmon <udp portnum>\n");
        exit(1);
    }

    if ((portnum = atoi(argv[1])) < 1) {
        printf("Invalid udp portnum\n");
        exit(1);
    }

    if ((sck = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) < 0) {
        printf("Couldn't create socket\n");
        exit(1);
    }

    sadr.sin_family      = AF_INET;
    sadr.sin_addr.s_addr = inet_addr("0.0.0.0");
    sadr.sin_port        = htons(portnum); /* network byte order */

    if (bind(sck, (struct sockaddr *)&sadr, sizeof(sadr)) < 0) {
        printf("Couldn't bind requested udp port\n");
        exit(1);
    }

    for (;;) {
        /* block waiting for the next datagram sent by syb_sendmsg() */
        if ((msglen = recvfrom(sck, msg, sizeof(msg), 0, NULL, &dummy)) < 0)
            printf("Couldn't recvfrom() from udp port\n");

        printf("%.*s\n", msglen, msg);
    }
}

Back to top
-------------------------------------------------------------------------------

6.2.9: Identities and Sequential Keys

-------------------------------------------------------------------------------

This has several sections, culled from various sources. It is better described
as "Everything you've ever wanted to know about identities." It will serve to
answer the following frequently asked questions:

What are the Features and Advantages of using Identities?
What are the Problems with and Disadvantages of Identities?
Common Questions about Identities

*Is Identity the equivalent of Oracle's Auto-sequencing?
*How do I configure a table to use the Identity field?
*How do I configure the burn factor?
*How do I find out if my tables have Identities defined?
*What is my current identity burn factor vulnerability?

How do I optimize the performance of a table that uses Identities?
How do I recover from a huge gap in my identity column?
How do I fix a table that has filled up its identity values?

OK, I hate identities. How do I generate sequential keys without using the
Identity feature?
How do I optimize a hand-made sequential key system for best performance?

- Question 8.1 of the comp.database.sybase FAQ has a quick blurb about
identities and sequential numbers. Search down in the page for the section
titled, "Generating Sequential Numbers." Question 8.1 is a general document
describing Performance and Tuning topics to be considered and thus doesn't go
into as much detail as this page.

- There's a white paper by Malcolm Colton available from the sybase web site.
Go to the Sybase web site http://www.sybase.com and type Surrogate in the search
form. Select the Surrogate Primary Keys, Concurrency, and the Cache Hit Ratio
document.
-------------------------------------------------------------------------------


Advantages/Features of Using Identities


There's an entire section devoted to Identity columns in the SQL Server
Reference manual, Chapter 5

Sybase System 10 introduced many changes over the 4.9.x architecture. One of
these changes was the Identity feature. The identity column is a special column
type that gets automatically updated by the server upon a new row insert. Its
purpose is to guarantee a unique row identifier not based on the other data in
the row. It was integrated with the server and made memory based for fast value
retrieval and no locking (as was/is the case with homegrown sequential key
generation schemes).

The Advantages and Features of Identities include:

*A non-SQL based solution to the problem of having a default unique value
assigned to a row. SQL Server prefetches identity values into cache and
adds them automatically to rows as they're inserted into tables that have a
type Identity column. There are no concurrency issues, no deadlocking in
high-insert situations, and no possibility of duplicate values.
*A high performance Unique identifier; SQL server's optimizer is tuned to
work well with Unique indexes based on the identity value.
*The flexibility to insert into the identity field a specific value in the
case of a mistaken row deletion. (You can never update however). You
accomplish this by:
1> set identity_insert [database]..[table] on
2> go

Note however that the System will not verify the uniqueness of the value
you specifically insert (unless of course you have a unique index existing
on the identity column).

*The flexibility during bcp to either retain existing identity values or
to reset them upon bcping back in. To retain the specific identity values
during a bcp out/in process, bcp your data out normally (no special
options). Then create your bcp in target table with ddl specifying the
identity column in the correct location. Upon bcp'ing back in, add the "-E"
option at the end of the bcp line, like this (from O/S prompt):
% bcp [database]..[new_table] in [bcp datafile] -Usa -S[server] -f [fmt file] -E

For procedures on resetting identity values during a bcp, see the section
regarding Identity gaps.

*Databasewide Identity options: 1) The ability to set Sybase to
automatically create an Identity column on any table that isn't created
with a primary key or a unique constraint specified. 2) Sybase can
automatically include an Identity field in all indexes created,
guaranteeing all will be unique. These two options improve index
performance and guarantee that updateable cursors and isolation level 0
reads can be used.
These features are set via sp_dboption, like this:
1> sp_dboption [dbname], "auto identity", true
2> go
or
1> sp_dboption [dbname], "identity in nonunique index", true
2> go

To tune the size of the auto identity (it defaults to precision 10):
1> sp_configure "size of auto identity", [desired_precision]
2> go

(the identity in nonunique index db_option and the size of auto identity
sp_configure value are new with System 11: the auto identity existed with
the original Identity feature introduction in System 10)

Like other dboptions, you can set these features on the model database
before creating new databases and all your future databases will be
configured. Be warned of the pitfalls of large identity gaps however; see
the question regarding Burn Factor Vulnerability in the Common Questions
about Identities section.

*The existence of the @@identity global variable, which keeps track of the
identity value assigned during the last insert executed by the server. This
variable can be used when programming SQL around tables that have identity
values (in case you need to know what the last value inserted was). If the
last value inserted in the server was to a non-identity table, this value
will be "0."

Back to start of 6.2.9
-------------------------------------------------------------------------------


Disadvantages/Drawbacks of Using Identities

Despite its efficacy of use, the Identity has some drawbacks:

*The mechanism that Sybase uses to allocate Identities involves a memory
based prefetch scheme for performance. The downside of this is, during
non-normal shutdowns of the SQL server (shutdown with nowait or flat out
crashes) the SQL server will simply discard or "burn" all the unused
identity values it has pre-allocated in memory. This sometimes leaves large
"gaps" in your monotonically increasing identity columns and can be
unsettling for some application developers and/or end users.

NOTE: Sybase 11.02.1 (EBF 6717) and below had a bug (bugid 96089) which
would cause "large gaps to occur in identity fields after polite
shutdowns." The Sybase 11.02.2 rollup (EBF 6886) fixed this problem. If
you're at or below 11.02.1 and you use identities, you should definitely
upgrade.

*(paraphrased from Sybooks P&T guide, Chapter 6): If you do a large number
of inserts and you have built your clustered index on an Identity column,
you will have major contention and deadlocking problems. This will
instantly create a hot spot in your database at the point of the last
inserted row, and it will cause bad contention if multiple insert requests
are received at once. Instead, create your clustered index on a field that
will somewhat randomize the inserts across the physical disk (such as last
name, account number, social security number, etc) and then create a
non-clustered index based on the identity field that will "cover" any
eligible queries.

The drawback here, as pointed out in the Identity Optimization section in
more detail, is that clustering on another field doesn't truly resolve the
concurrency issues. The hot spot simply moves from the last data page to
the last non-clustered index page of the index created on the Identity
column.


*If you fill up your identity values, no more inserts can occur. This can
be a big problem, especially if you have a large number of inserts and you
have continually crashed your server. However this problem most often
occurs when you try to alter a table and add an Identity column that's too
small, or if you try to bcp into a table with an identity column that's too
small. If this occurs, follow the procedures for recovering from identity
gaps.
*I've heard (but not been able to reproduce) that identities jump
significantly when dumping and loading databases. Not confirmed.


NOTE: there are several other System 11 bugs related to Identities. EBF
7312 fixes BugId 97748, which caused duplicate identity values to be
inserted at times. EBF 6886 fixed (in addition to the above described bug)
an odd bug (#82460) which caused a server crash when bcping into a table w/
an identity added via alter table. As always, try to stay current on EBFs.

Back to start of 6.2.9
-------------------------------------------------------------------------------


Common questions about Identities

Is the Identity the equivalent of Oracle's auto-sequencing?:

Answer: More or less yes. Oracle's auto-sequencing feature is somewhat
transparent to the end user and automatically increments if created as a
primary key upon a row insert. The Sybase Identity column is normally specified
at table creation and thus is a functional column of the table. If however you
set the "auto identity" feature for a database, the tables created will have a
"hidden" identity column that doesn't even appear when you execute a select *
from [table]. See the Advantages of Identities for more details.

*How do I configure Identities?: You can either create your table
initially with the identity column:
1> create table ident_test
2> (text_field varchar(10),
3> ident_field numeric(5,0) identity)
4> go

Or alter an existing table and add an identity column:
1> alter table existing_table
2> add new_identity_field numeric(7,0) identity
3> go

When you alter a table and add an identity column, the System locks the
table while systematically incrementing and adding unique values to each
row. IF YOU DON'T SPECIFY a precision, Sybase defaults the size to 18!
That's 1,000,000,000,000,000,000-1 possible values and some major
problems if you ever crash your SQL server and burn a default number of
values... (10^18 with the default burn factor will burn 5 x 10^14 or
500,000,000,000,000 values...yikes).



*How do I Configure the burn factor?: The number of identity values that
gets "burned" upon a crash or a shutdown can by found by logging into the
server and typing:
1> sp_configure "identity burning set factor"
2> go

the Default value set upon install is 5000. The number "5000" in this case
is read as ".05% of all the potential identity values you can have in this
particular case will be burned upon an unexpected shutdown." The actual
number depends on the size of the identity field as you specified it when
you created your table.

To set the burn factor, type:
1> sp_configure "identity burning set factor", [new value]
2> go

This is a static change; the server must be rebooted before it takes
effect.



*How do I tell which tables have identities?: You can tell if a table has
identities one of two ways:

1. sp_help [tablename]: there is a field included in the sp_help output
describing a table called "Identity." It is set to 1 for identity
fields, 0 otherwise.
2. Within a database, execute this query:
1> select object_name(id) "table",name "column", prec "precision"
2> from syscolumns
3> where convert(bit, (status & 0x80)) = 1
4> go


this will list all the tables and the field within the table that serves as
an identity, and the size of the identity field.



*What is my identity burn factor vulnerability right now?:
In other words, what would happen to my tables if I crashed my server right
now?

Identities are created type numeric, scale 0, and precision X. A precision
of 9 means the largest identity value the server will be able to process is
10^9-1, or 1,000,000,000-1, or 999,999,999. However, when it comes to
Burning identities, the server will burn (based on the default value of
5000) .05% of 1,000,000,000 or 500,000 values in the case of a crash. (You
may think an identity precision allowing for 1 Billion rows is optimistic,
but I once saw a precision set at 14...then the database crashed and their
identity values jumped 5 TRILLION. Needless to say they abandoned their
original design. Even worse, SQL server defaults precision to 18 if you
don't specify it upon table creation...that's a MINIMUM 100,000,000,000 jump
in identity values upon a crash, even with the absolute minimum burn factor)

Lets say you have inserted 5 rows into a table, and then you crash your
server and then insert 3 more rows. If you select all the values of your
identity field, it will look like this:
1> select identity_field from id_test
2> go
identity_field
--------------
1
2
3
4
5
500006
500007
500008

(8 rows affected)

Here's your Identity burning options (based on a precision of 10^9 as
above):
Burn value   % of values   # values burned during crash
      5000      .05%                          500,000
      1000      .01%                          100,000
       100      .001%                          10,000
        10      .0001%                          1,000
         1      .00001%                           100

So, the absolute lowest amount of numbers you'll burn, assuming you
configure the burn factor down to 1 (sp_configure "identity burning set
factor", 1) and a precision of 9, is 100 values.

Back to start of 6.2.9
---------------------------------------------------------------------------

Optimizing your Identity setup for performance and maintenance

If you've chosen to use Identities in your database, here are some
configuration tips to avoid typical Identity pitfalls:
+Tune the burn factor!: see the vulnerability section for a discussion
on what happens to identity values upon SQL server crashes. Large jumps
in values can crash front ends that aren't equipped to handle and
process numbers upwards of 10 Trillion. I've seen Powerbuilder
applications crash and/or not function properly when trying to display
these large identity values.
+Run update statistics often on tables w/ identities: Any index with
an identity value as the first column in the search condition will have
its performance severely hampered if Update statistics is not run
frequently. Running a nightly update statistics/sp_recompile job is a
standard DBA task, and should be run often regardless of the existence
of identities in your tables.
+Tune the "Identity Grab Size": SQL server defaults the number of
Identity values it pre-fetches to one (1). This means that in high
insert environments the Server must constantly update its internal
identity placeholder structure before adding the row. By tuning this
parameter up:
1> sp_configure "identity grab size", [number]
2> go

You can prefetch larger numbers of values for each user as they log
into the server and insert rows. The downside of this is, if the user
doesn't use all of the prefetched block of identity values, the unused
values are lost (seeing as, if another user logs in the next block gets
assigned to him/her). This can quickly accelerate the depletion of
identity values and can cause gaps in Identity values.
(this feature is new with System 11)

+Do NOT build business rules around Identity values. More generally
speaking the recommendation made by DBAs is, if your end users are EVER
going to see the identity field during the course of doing their job,
then DON'T use it. If your only use of the Identity field is for its
advertised purpose (that being solely to have a uniquely identifying
row for a table to index on) then you should be fine.
+Do NOT build your clustered index on your Identity field, especially
if you're doing lots of inserts. This will create a hot spot of
contention at the point of insertion, and in heavier OLTP environments
can be debilitating.

- There is an excellent discussion located in the whitepapers section of
Sybase's home page discussing the performance and tuning aspects of
Identities. It supplements some of the information located here (Note: this
will open in a new browser window).

Back to start of 6.2.9
---------------------------------------------------------------------------

Recovery from Large Identity value gaps or
Recovery from Identity insert errors/Full Identity tables


This section will discuss how to re-order the identity values for a table
following a crash/abnormal shutdown that has resulted in huge gaps in the
values. The same procedure is used in cases where the identity field has
"filled up" and does not allow inserts anymore. Some applications that use
Identities are not truly candidates for this process (i.e., applications
that depend on the identity field for business purposes as opposed to
simple unique row identifiers). Applications like this that wish to rid
their dependence on identities will have to re-evaluate their database
design.
+Method 1:bcp out and in:
- First, (from O/S command line):
% bcp database..table out [data_file] -Usa -S[server] -N

This will create a binary bcp datafile and will force the user to
create a .fmt file. The -N option tells the server to skip the identity
field while bcp'ing out.
- drop and recreate the table in question from ddl (make sure your
table ddl specifies the identity field).
- Now bcp back in:
% bcp database..table in [data_file] -Usa -S[server] -f [fmt file] -N

The -N option during bcp in tells the server to ignore the data file's
placeholder column for the defined identity column.


Incidentally, if you bcp out w/o the -N option, drop the table,
recreate from ddl specifying the identity field, and bcp back in w/o
the -N option, the same effect as above occurs.

(note: if you bcp out a table w/ identity values and then want to
preserve the identity values during the bcp back in, use the "-E"
option.)

+Method 2: select into a new table, adding the identity column as you
go: Follow this process:
1> select [all columns except identity column],
2> [identity column name] = identity(desired_precision)
3> into [new_table]
4> from [old table]
5> go

+There are alternate methods that perform the above in multi steps,
and might be more appropriate in some situations.
oYou can bcp out all the fields of a table except the identity
column (create the bcp format file from the original table, edit
out the identity column, and re-bcp). At this point you can create
a new table with or without the identity column; if you create it
with, as you bcp back in the Server will assign new identity
values. If you create it without, you can bcp back in normally and
then alter the table and add the identity later.
oYou can select all columns but the identity into a new table,
then alter that table and add an identity later on.


Back to start of 6.2.9
---------------------------------------------------------------------------

How do I generate Sequential Keys w/o the Identity feature?


There are many reasons not to use the Identity feature of Sybase. This
section will present several alternative methods, along with their
advantages and drawbacks. The methods are presented in increasing order of
complexity. The most often implemented is Method 3, which is a more robust
version of Method 2 and which uses a surrogate-key storage table.

Throughout this section the test table I'm adding lines to and generating
sequential numbers for is table inserttest, created like this:
1> create table inserttest
2> (testtext varchar(25), counter int)
3> go
+Method 1: Create your table with a column called counter of type int.
Then, each time you insert a row, do something like this:
1> begin tran
2> declare @nextkey int
3> select @nextkey=max(counter)+1 from inserttest holdlock
4> insert inserttest (testtext,counter) values ("test_text",@nextkey)
5> go


1> commit tran
2> go

This method is rather inefficient, as large tables will take minutes to
return a max(column) value, plus the entire table must be locked for
each insert (since the max() will perform a table scan). Further, the
select statement does not guarantee an exclusive lock when it executes
unless you have the "holdlock" option; so either duplicate values might
be inserted to your target table or you have massive deadlocking.


+Method 2: See Question 10.1.1 of the comp.database.sybase FAQ and the
May 1994 (Volume 3, Number 2) Sybase Technical Note (these links will
open in a new browser window). Search down in the tech note for the
article titled, "How to Generate Sequential Keys for Table Key
Columns." This has a simplistic solution that is expanded upon in
Method 3.


+Method 3: Create a holding table for keys in a common database:
Here's our central holding table.
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int)
4> go

And initially populate it with the tablenames and last values inserted
(enter in a 0 for tables that are brand new).
1> insert into keystorage (tablename,lastkey)
2> select "inserttest", max(counter) from inserttest
3> go

Now, whenever you go to insert into your table, go through a process
like this:
1> begin tran
2> update keystorage set lastkey=lastkey+1 where tablename="inserttest"
3> go

1> declare @lastkey int
2> select @lastkey = lastkey from keystorage where tablename="inserttest"
3> insert inserttest (testtext,counter) values ("nextline",@lastkey)
4> go



1> commit tran
2> go

There is plenty of room for error checking with this process: for
example (code adapted from Colm O'Reilly (co...@mail.lk.blackbird.ie)
post to Sybase-L 6/20/97):
1> begin tran
2> update keystorage set lastkey=lastkey+1 where tablename="inserttest"
3> if @@rowcount=1
4> begin
5> declare @lastkey int
6> select @lastkey=lastkey from keystorage where tablename="inserttest"
7> end
8> commit tran
9> begin tran
10> if @lastkey is not null
11> begin
12> insert inserttest (testtext,counter) values ("third line",@lastkey)
13> end
14> commit tran
15> go

This provides a pretty failsafe method of guaranteeing the success of
the select statements involved in the process. You still have a couple
of implementation decisions though:
oOne transaction or Two? The above example uses two transactions
to complete the task; one to update the keystorage and one to
insert the new data. Using two transactions reduces the amount of
time the lock is held on keystorage and thus is better for high
insertion applications. However, the two transaction method opens
up the possibility that the first transaction will commit and the
second will roll back, leaving a gap in the sequential numbers. (of
course, this gap is small potatoes compared to the gaps that occur
in Identity values). Using one transaction (deleting lines 8 and 9
in the SQL above) will guarantee absolutely no gaps in the values,
but will lock the keystorage table longer, reducing concurrency in
high insert applications.
oUpdate first or select first? The examples given generally update
the keystorage table first, THEN select the new value. Performing
the select first (you will have to rework the creation scheme
slightly; by selecting first you're actually getting the NEXT key
to add, where as by updating first, the keystorage table actually
holds the LAST key added) you allow the application to continue
processing while it waits for the update lock on the table.
However, performing the update first guarantees uniqueness (selects
are not exclusive).


Some DBAs experienced with this keystorage table method warn of large
amounts of blocking in high insert activity situations, a potential
drawback.


+Method 4: Enhance the above method by creating an insert trigger on
your inserttest table that performs the next-key obtainment logic (a
rough sketch of this appears after this list of methods). Or you could
create an insert trigger on keystorage which updates the table and
obtains your value for you. Integrating the trigger logic into your
application might make this approach more complex. Also, because
of the nature of the trigger you'll have to define the sequence number
columns as allowing NULL values (a bad thing if you're depending on the
sequential number as your primary key). Plus, triggers will slow the
operation down because after obtaining the new value via trigger,
you'll have to issue an extra update command to insert the rest of your
table values.
+Method 5: (Thanks to John Drevicky (jdre...@tca-techsys.com))
The following procedure is offered as another example of updating and
returning the Next Sequential Key, with an option that allows automatic
reuse of numbers......
-----------------------------------------------------------------
----
--
DECLARE @sql_err int, @sql_count int
--
begin tran
--
select @out_seq = 0
--
UPDATE NEXT_SEQUENCE
SET next_seq_id
= ( next_seq_id
* ( sign(1 + sign(max_seq_id - next_seq_id) ) -- evaluates: 0 [when
-- next > max]; else 1
* sign(max_seq_id - next_seq_id) -- evaluates: 0 [when next = max];
-- 1 [next < max];
-- -1 [next > max]
) -- both evaluate to 1 when next < max
) + 1 -- increment by [or restart at] 1
WHERE seq_type = @in_seq_type
--
select @sql_err = @@error, @sql_count = @@rowcount
--
IF @sql_err = 0 and @sql_count = 1
BEGIN
select @out_seq = next_seq_id
from NEXT_SEQUENCE
where seq_type = @in_seq_type
--
commit tran
return 0
END
ELSE
BEGIN
RAISERROR 44999 'Error %1! returned from proc derive_next_sequence...no update occurred', @sql_err
rollback tran
END

+Other Methods: there are several other implementation alternatives
available that involve more complex logic but which might be good
solutions. One example has a central table that stores pre-inserted
sequential numbers that are deleted as they are inserted into the
production rows. This method allows the sequence numbers to be recycled
if their associated row is deleted from the production table. An
interesting solution was posted to Sybase-L 6/20/97 by Matt Townsend (
mto...@concentric.net) and is based on the millisecond field of the
date/time stamp. His solution guarantees uniqueness without any
surrogate tables or extra inserts/updates, and performs better than the
other methods described here (including Identities), but cannot produce
exact sequential numbers. Some other solutions are covered in a white
paper on sequential keys available in Sybase's Technical Library.

Back to start of 6.2.9
---------------------------------------------------------------------------

Optimizing your home grown Sequential key generating process for any
version of Sybase

+max_rows_per_page/fillfactor/table padding to simulate row level
locking: This is the most important tuning mechanism when creating a
hand-made sequence key generation scheme. Because of Sybase's page
level locking mechanism, your concurrency in high-insert-activity
situations could be destroyed unless the server only grabs one row at
a time. However, since Sybase doesn't currently have row-level
locking, we simulate row-level locking by creating our tables in such
a way as to guarantee one row per 2048 byte page.
oFor pre-System 11 servers; Calculate the size of your rows, then
create dummy fields in the table that get populated with junk but
which guarantee the size of the row will fill an entire page. For
example (code borrowed from Gary Meyer's 5/8/94 ISUG presentation (
gme...@netcom.com)):
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int,
4> filler1 char(255) not null,
5> filler2 char(255) not null,
6> filler3 char(255) not null,
7> filler4 char(255) not null,
8> filler5 char(255) not null,
9> filler6 char(255) not null,
10> filler7 char(255) not null)
11> with fillfactor = 100
12> go

We use 7 char(255) fields to pad our small table. We also specify
the fillfactor create table option to be 100. A fillfactor of 100
tells the server to completely fill every data page. Now, during
your initial insertion of a line of data, do this:
1> insert into keystorage
2> (tablename,lastkey,
3> filler1,filler2,filler3,filler4,filler5,filler6,filler7)
4> values
5> ("yourtable",0,
6> replicate("x",250),replicate("x",250),
7> replicate("x",250),replicate("x",250),
8> replicate("x",250),replicate("x",250),
9> replicate("x",250))
10> go

This pads the row with 1750 bytes of junk, almost guaranteeing
that, given a row's byte size limit of 1962 bytes (a row cannot
span more than one page, thus the 2048 page size minus server
overhead == 1962), we will be able to simulate row level locking.

oIn Sybase 11, a new create table option was introduced:
max_rows_per_page. It automates the manual procedures above and
guarantees at a system level what we need to achieve; one row per
page.
1> create table keystorage
2> (tablename varchar(25),
3> lastkey int)
4> with max_rows_per_page = 1
5> go


+Create unique clustered indexes on the tablename/entity name within
your keystorage table. This can only improve its performance. Remember
to set max_rows_per_page or the fillfactor on your clustered index, as
clustered indexes physically reorder the data.
+Break up the process into multiple transactions wherever possible;
this will reduce the amount of time any table lock is held and will
increase concurrency in high insertion environments.
+Use Stored Procedures: Put the SQL commands that update the
keystorage table and then select the updated key value into a stored
procedure (see the sketch after this list). Stored procedures are
generally faster than individual SQL statements in your code because
they are pre-compiled and their query plans are stored in Sybase's
system tables.
+Enhance the keystorage table to contain a fully qualified table name
as opposed to just the tablename. This can be done by adding fields to
the table definition or by just expanding the entity name varchar field
definition. Then place the keystorage table in a central location/
common database that applications share. This will eliminate multiple
keystorage tables but might add length to queries (since you have to do
cross-database queries to obtain the next key).
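
The stored procedure approach mentioned above might look something like the
following minimal sketch (the procedure name, parameter names and the column
names in the final insert are made up for illustration; it assumes the
keystorage table shown earlier, with tablename and lastkey columns):

create procedure sp_get_next_key
    @tablename varchar(25),      -- entity to generate a key for
    @nextkey   int output        -- the newly allocated key
as
begin tran
    -- update first: this takes the exclusive lock and guarantees uniqueness
    update keystorage
       set lastkey = lastkey + 1
     where tablename = @tablename

    -- read back the value just allocated
    select @nextkey = lastkey
      from keystorage
     where tablename = @tablename
commit tran
go

It would be called from the insert logic along these lines:

declare @newkey int
exec sp_get_next_key "yourtable", @newkey output
insert into yourtable (id, name) values (@newkey, "some value")
go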

- There is an excellent discussion in the whitepapers section of
Sybase's home page covering the performance and tuning aspects of any
type of sequential key use. It supplements the information here.

Back to start of 6.2.9

Back to top
-------------------------------------------------------------------------------

6.2.10 How can I execute dynamic SQL with ASE/SQL Server?

-------------------------------------------------------------------------------

Adaptive Server Enterprise: System 12

ASE 12 supports dynamic SQL, allowing the following:

declare @sqlstring varchar(255)
select @sqlstring = "select count(*) from master..sysobjects"
exec (@sqlstring)
go

Adaptive Server Enterprise: 11.5 and 11.9

There is a neat trick that was reported first by Bret Halford (br...@sybase.com
). (If anyone knows better, point me to the proof and I will change this!) It
utilises the CIS features of Sybase ASE.

*Firstly define your local server to be a remote server using
sp_addserver LOCALSRV,sql_server[,INTERFACENAME]
go

*Enable CIS
sp_configure "enable cis",1
go

*Finally, use sp_remotesql, sending the sql to the server defined in point
1.
declare @sqlstring varchar(255)
select @sqlstring = "select count(*) from master..sysobjects"
sp_remotesql LOCALSRV,@sqlstring
go

Remember to ensure that all of the databases referred to in the SQL string are
fully qualified since the call to sp_remotesql places you back in your default
database.

Sybase SQL Server (4.9.x, 10.x and 11.x before 11.5)

Before System 11.5 there was no real way to execute dynamic SQL. Rob Verschoor
has some very neat ideas that fill some of the gaps (http://www.euronet.nl/
~syp_rob/dynsql.html).

Dynamic Stored Procedure Execution

With System 10, Sybase introduced the ability to execute a stored procedure
dynamically.

declare @sqlstring varchar(255)
select @sqlstring = "sp_who"
exec @sqlstring
go

For some reason Sybase chose never to document this feature.

Obviously all of this is talking about executing dynamic SQL within the server
itself, i.e. in stored procedures and triggers. Dynamic SQL within client apps
is a different matter altogether.



Advanced ASE Administration



1.3.1. How do I clear a log suspend'd connection?
1.3.2. What's the best value for cschedspins?
1.3.3. What traceflags are available?
1.3.4. How do I use traceflags 5101 and 5102?
1.3.5. What is cmaxpktsz good for?
1.3.6. What do all the parameters of a buildmaster -d<device> -yall mean?
1.3.7. What is CIS and how do I use it?
1.3.8. If the master device is full how do I make the master database
bigger?

next prev ASE FAQ
-------------------------------------------------------------------------------

1.3.1 How to clear a log suspend

-------------------------------------------------------------------------------

A connection that is in a log suspend state is there because the transaction
that it was performing couldn't be logged. The reason it couldn't be logged is
because the database transaction log is full. Typically, the connection that
caused the log to fill is the one suspended. We'll get to that later.

In order to clear the problem you must dump the transaction log. This can be
done as follows:

dump tran db_name to dump_device
go

At this point, any completed transactions will be flushed out to disk. If you
don't care about the recoverability of the database, you can issue the
following command:

dump tran db_name with truncate_only

If that doesn't work, you can use the with no_log option instead of the with
truncate_only.

After successfully clearing the log the suspended connection(s) will resume.

Unfortunately, as mentioned above, there is the situation where the connection
that is suspended is the culprit that filled the log. Remember that dumping the
log only clears out completed transactions. If the connection filled the log
with one large transaction, then dumping the log isn't going to clear the
suspension.

System 10

What you need to do is issue a SQL Server kill command on the connection and
then unsuspend it:

select lct_admin("unsuspend", db_id("db_name"))
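
A minimal sketch of the whole sequence (the spid of 42 is made up; find the
real spid by looking for the suspended connection in sysprocesses, and note
that the exact status string may vary between versions):

select spid, cmd, status
  from master..sysprocesses
 where status = "log suspend"
go
kill 42
go
select lct_admin("unsuspend", db_id("db_name"))
go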

System 11

See Sybase Technical News Volume 6, Number 2

Retaining Pre-System 10 Behavior

By setting a database's abort tran on log full option, pre-System 10 behavior
can be retained. That is, if a connection cannot log its transaction to the log
file, it is aborted by the SQL Server rather than suspended.
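
For example (the database name mydb is made up; sp_dboption must be run from
master, and a checkpoint issued in the target database for the option to take
effect):

use master
go
sp_dboption mydb, "abort tran on log full", true
go
use mydb
go
checkpoint
go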

Return to top
-------------------------------------------------------------------------------

1.3.2 What's the best value for cschedspins?

-------------------------------------------------------------------------------

It is crucial to understand that cschedspins is a tunable parameter
(recommended values being between 1-2000) and the optimum value is completely
dependent on the customer's environment. cschedspins is used by the scheduler
only when it finds that there are no runnable tasks. If there are no runnable
tasks, the scheduler has two options:

1. Let the engine go to sleep (which is done by an OS call) for a specified
interval or until an event happens. This option assumes that tasks won't
become runnable because of tasks executing on other engines. This would
happen when the tasks are waiting for I/O more than any other resource such
as locks, which means that we could free up the CPU resource (by going to
sleep) and let the system use it to expedite completion of system tasks,
including I/O.
2. Go and look for a ready task again. This option assumes that a task would
become runnable in the near term and so incurring the extra cost of an OS
context switch through the OS sleep/wakeup mechanism is unacceptable. This
scenario assumes that tasks are waiting on resources such as locks, which
could free up because of tasks executing on other engines, more than they
wait for I/O.


cschedspins controls how many times we would choose option 2 before choosing
option 1. Setting cschedspins low favors option 1 and setting it high favors
option 2. Since an I/O intensive task mix fits in with option 1, setting
cschedspins low may be more beneficial. Similarly since a CPU intensive job mix
favors option 2, setting cschedspins high may be beneficial.

The consensus is that a single CPU server should have cschedspins set to 1.
However, I strongly recommend that users carefully test values for cschedspins
and monitor the results closely. I have seen more than one site shoot itself
in the foot, so to speak, by changing this parameter in production without a
good understanding of its environment.

Return to top
-------------------------------------------------------------------------------

1.3.3 Trace Flag Definitions

-------------------------------------------------------------------------------

To activate trace flags, add them to the RUN_* script. The following example is
using the 1611 and 260 trace flags.


Use of these traceflags is not recommended by Sybase. Please use at your
own risk.

% cd ~sybase/install
% cat RUN_BLAND
#!/bin/sh
#
# SQL Server Information:
# name: BLAND
# master device: /usr/sybase/dbf/BLAND/master.dat
# master device size: 25600
# errorlog: /usr/sybase/install/errorlog_BLAND
# interfaces: /usr/sybase
#
/usr/sybase/dataserver -d/usr/sybase/dbf/BLAND/master.dat \
-sBLAND -e/usr/sybase/install/errorlog_BLAND -i/usr/sybase \
-T1611 -T260
-------------------------------------------------------------------------------


Trace Flags
+------+----------------------------------------------------------------------+
+------+----------------------------------------------------------------------+
| Flag | Description |
+------+----------------------------------------------------------------------+
| 200 | Displays messages about the before image of the query-tree. |
+------+----------------------------------------------------------------------+
| 201 | Displays messages about the after image of the query-tree. |
+------+----------------------------------------------------------------------+
| 241 | Compress all query-trees whenever the SQL dataserver is started. |
+------+----------------------------------------------------------------------+
| 260 | Reduce TDS (Tabular Data Stream) overhead in stored procedures. Turn |
| | off done-in-proc packets. Do not use this if your application is a |
| | ct-lib based application; it'll break. |
| | |
| | Why set this on? Glad you asked, typically with a db-lib application |
| | a packet is sent back to the client for each batch executed within a |
| | stored procedure. This can be taxing in a WAN/LAN environment. |
+------+----------------------------------------------------------------------+
| 299 | This trace flag instructs the dataserver to not recompile a child |
| | stored procedure that inherits a temp table from a parent procedure. |
+------+----------------------------------------------------------------------+
| 302 | Print information about the optimizer's index selection. |
+------+----------------------------------------------------------------------+
| 303 | Display OR strategy |
+------+----------------------------------------------------------------------+
| 304 | Revert special or optimizer strategy to that strategy used in |
| | pre-System 11 (this traceflag resolved several bug issues in System |
| | 11, most of these bugs are fixed in SQL Server 11.0.3.2) |
+------+----------------------------------------------------------------------+
| 310 | Print information about the optimizer's join selection. |
+------+----------------------------------------------------------------------+
| 311 | Display the expected IO to satisfy a query. Like statistics IO |
| | without actually executing. |
+------+----------------------------------------------------------------------+
| 317 | Provide extra optimization information. |
+------+----------------------------------------------------------------------+
| 319 | Reformatting strategies. |
+------+----------------------------------------------------------------------+
| 320 | Turn off the join order heuristic. |
+------+----------------------------------------------------------------------+
| 324 | Turn off the like optimization for ad-hoc queries using |
| | @local_variables. |
+------+----------------------------------------------------------------------+
| 602 | Prints out diagnostic information for deadlock prevention. |
+------+----------------------------------------------------------------------+
| 603 | Prints out diagnostic information when avoiding deadlock. |
+------+----------------------------------------------------------------------+
| 699 | Turn off transaction logging for the entire SQL dataserver. |
+------+----------------------------------------------------------------------+
| 1204 | Send deadlock detection to the errorlog. |
| * | |
+------+----------------------------------------------------------------------+
| 1205 | Stack trace on deadlock. |
+------+----------------------------------------------------------------------+
| 1206 | Disable lock promotion. |
+------+----------------------------------------------------------------------+
| 1603 | Use standard disk I/O (i.e. turn off asynchronous I/O). |
| * | |
+------+----------------------------------------------------------------------+
| 1605 | Start secondary engines by hand |
+------+----------------------------------------------------------------------+
| 1606 | Create a debug engine start file. This allows you to start up a |
| | debug engine which can access the server's shared memory for running |
| | diagnostics. I'm not sure how useful this is in a production |
| | environment as the debugger often brings down the server. I'm not |
| | sure if Sybase have ported the debug stuff to 10/11. Like most of |
| | their debug tools it started off quite strongly but was never |
| | developed. |
+------+----------------------------------------------------------------------+
| 1608 | Startup only engine 0; use dbcc engine("online") to incrementally |
| | bring up additional engines until the maximum number of configured |
| | engines. |
+------+----------------------------------------------------------------------+
| 1610 | Boot the SQL dataserver with TCP_NODELAY enabled. |
| * | |
+------+----------------------------------------------------------------------+
| 1611 | If possible, pin shared memory -- check errorlog for success/ |
| * | failure. |
+------+----------------------------------------------------------------------+
| 1613 | Set affinity of the SQL dataserver engine's onto particular CPUs -- |
| | usually pins engine 0 to processor 0, engine 1 to processor 1... |
+------+----------------------------------------------------------------------+
| 1615 | SGI only: turn on recoverability to filesystem devices. |
+------+----------------------------------------------------------------------+
| 1625 | Linux 11.9.2 only: Revert to using cached filesystem I/O. By |
| | default, ASE on Linux 11.9.2 opens filesystem devices using O_SYNC, |
| | unlike other Unix based releases, which means it is safe to use |
| | filesystems devices for production systems. |
+------+----------------------------------------------------------------------+
| 2512 | Prevent dbcc from checking syslogs. Useful when you are constantly |
| | getting spurious allocation errors. |
+------+----------------------------------------------------------------------+
| 3300 | Display each log record that is being processed during recovery. You |
| | may wish to redirect stdout because it can be a lot of information. |
+------+----------------------------------------------------------------------+
| 3500 | Disable checkpointing. |
+------+----------------------------------------------------------------------+
| 3502 | Track checkpointing of databases in errorlog. |
+------+----------------------------------------------------------------------+
| 3601 | Stack trace when error raised. |
+------+----------------------------------------------------------------------+
| 3604 | Send dbcc output to screen. |
+------+----------------------------------------------------------------------+
| 3605 | Send dbcc output to errorlog. |
+------+----------------------------------------------------------------------+
| 3607 | Do not recover any database, clear tempdb, or start up checkpoint |
| | process. |
+------+----------------------------------------------------------------------+
| 3608 | Recover master only. Do not clear tempdb or start up checkpoint |
| | process. |
+------+----------------------------------------------------------------------+
| 3609 | Recover all databases. Do not clear tempdb or start up checkpoint |
| | process. |
+------+----------------------------------------------------------------------+
| 3610 | Pre-System 10 behavior: divide by zero to result in NULL instead of |
| | error - also see Q6.2.5. |
+------+----------------------------------------------------------------------+
| 3620 | Do not kill infected processes. |
+------+----------------------------------------------------------------------+
| 4012 | Don't spawn chkptproc. |
+------+----------------------------------------------------------------------+
| 4013 | Place a record in the errorlog for each login to the dataserver. |
+------+----------------------------------------------------------------------+
| 4020 | Boot without recover. |
+------+----------------------------------------------------------------------+
| 5101 | Forces all I/O requests to go thru engine 0. This removes the |
| | contention between processors but could create a bottleneck if |
| | engine 0 becomes busy with non-I/O tasks. For more information... |
| | 5101/5102. |
+------+----------------------------------------------------------------------+
| 5102 | Prevents engine 0 from running any non-affinitied tasks. For more |
| | information...5101/5102. |
+------+----------------------------------------------------------------------+
| 7103 | Disable table lock promotion for text columns. |
+------+----------------------------------------------------------------------+
| 8203 | Display statement and transaction locks on a deadlock error. |
+------+----------------------------------------------------------------------+
| * | Starting with System 11 these are sp_configure'able |
+------+----------------------------------------------------------------------+

Return to top
-------------------------------------------------------------------------------

1.3.4 Trace Flags -- 5101 and 5102

-------------------------------------------------------------------------------

5101

Normally, each engine issues and checks for its own Disk I/O on behalf of the
tasks it runs. In completely symmetric operating systems, this behavior
provides maximum I/O throughput for SQL Server. Some operating systems are not
completely symmetric in their Disk I/O routines. For these environments, the
server can be booted with the 5101 trace flag. While tasks still request disk I
/O from any engine, the actual request to/from the OS is performed by engine 0.
The performance benefit comes from the reduced or eliminated contention on the
locking mechanism inside the OS kernel. To enable I/O affinity to engine 0,
start SQL Server with the 5101 Trace Flag.

Your errorlog will indicate the use of this option with the message:

Disk I/O affinitied to engine: 0

This trace flag only provides performance gains for servers with 3 or more
dataserver engines configured and being significantly utilized.

Use of this trace flag with fully symmetric operating systems will degrade
performance!

5102

The 5102 trace flag prevents engine 0 from running any non-affinitied tasks.
Normally, this forces engine 0 to perform Network I/O only. Applications with
heavy result set requirements (either large results or many connections issuing
short, fast requests) may benefit. This effectively eliminates the normal
latency for engine 0 to complete running its user thread before it issues the
network I/O to the underlying network transport driver. If used in conjunction
with the 5101 trace flag, engine 0 would perform all Disk I/O and Network I/O.
For environments with heavy disk and network I/O, engine 0 could easily
saturate when only the 5101 flag is in use. This flag allows engine 0 to
concentrate on I/O by not allowing it to run user tasks. To force task affinity
off engine 0, start SQL Server with the 5102 Trace Flag.

Your errorlog will indicate the use of this option with the message:

I/O only enabled for engine: 0
-------------------------------------------------------------------------------

Warning: Not supported by Sybase. Provided here for your enjoyment.

Return to top
-------------------------------------------------------------------------------

1.3.5 What is cmaxpktsz good for?

-------------------------------------------------------------------------------

cmaxpktsz corresponds to the parameter "maximum network packet size" which you
can see through sp_configure. I recommend only updating this value through
sp_configure. If some of your applications send or receive large amounts of
data across the network, these applications can achieve significant performance
improvement by using larger packet sizes. Two examples are large bulk copy
operations and applications reading or writing large text or image values.
Generally, you want to keep the value of default network packet size small for
users performing short queries, and allow users who send or receive large
volumes of data to request larger packet sizes by setting the maximum network
packet size configuration variable.

caddnetmem corresponds to the parameter "additional netmem" which you can see
through sp_configure. Again, I recommend only updating this value through
sp_configure. "additional netmem" sets the maximum size of additional memory
that can be used for network packets that are larger than SQL Server's default
packet size. The default value for additional netmem is 0, which means that no
extra space has been allocated for large packets. See the discussion below,
under maximum network packet size, for information on setting this
configuration variable. Memory allocated with additional netmem is added to the
memory allocated by the memory configuration parameter; it does not affect
other SQL Server memory uses.

SQL Server guarantees that every user connection will be able to log in at the
default packet size. If you increase maximum network packet size and additional
netmem remains set to 0, clients cannot use packet sizes that are larger than
the default size: all allocated network memory will be reserved for users at
the default size. In this situation, users who request a large packet size when
they log in receive a warning message telling them that their application will
use the default size. To determine the value for additional netmem if your
applications use larger packet sizes:

*Estimate the number of simultaneous users who will request the large
packet sizes, and sum the packet sizes their applications will request.
*Multiply this sum by three, since each connection needs three buffers.
*Add 2% for overhead, rounded up to the next multiple of 512 (a worked
example follows this list).
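
For example (the numbers are purely illustrative): suppose 10 connections will
each request 4096-byte packets.

    sum of requested packet sizes  = 10 * 4096     = 40960
    three buffers per connection   = 40960 * 3     = 122880
    plus 2% overhead               = 122880 * 1.02 = 125337.6
    rounded up to a multiple of 512                = 125440

So "additional netmem" would be set to 125440.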

Return to top
-------------------------------------------------------------------------------

1.3.6 Buildmaster Configuration Definitions

-------------------------------------------------------------------------------


Attention! Please notice, be very careful with these parameters. Use only
at your own risk. Be sure to have a copy of the original parameters. Be
sure to have a dump of all dbs (include master) handy.

Since the release of 11.0, there is a lot less need for buildmaster to
configure parameters. Check sp_configure and/or SERVERNAME.cfg to see if
the configuration parameter is there before using buildmaster.

-------------------------------------------------------------------------------

The following is a list of configuration parameters and their effect on the SQL
Server. Changes to these parameters can affect performance of the server.
Sybase does not recommend modifying these parameters without first discussing
the change with Sybase Tech Support. This list is provided for information
only.

These are categorized into two kinds:

*Configurable through sp_configure and
*not configurable but can be changed through 'buildmaster
-y<variable>=value -d<dbdevice>'


Configurable variables:


crecinterval:

The recovery interval specified in minutes.

ccatalogupdates:

A flag to inform whether system catalogs can be updated or not.

cusrconnections:
This is the number of user connections allowed in SQL
Server. This value + 3 (one each for the checkpoint, network
and mirror handlers) makes up the number of PSS structures
configured in the server.
-------------------------------------------------------------------------------

cfgpss:
Number of PSS configured in the server. This value will
always be 3 more than cusrconnections. The reason is we
need PSS for checkpoint, network and mirror handlers.

THIS IS NOT CONFIGURABLE.
-------------------------------------------------------------------------------

cmemsize:
The total memory configured for the Server in 2k
units. This is the memory the server will use for both
Server and Kernel Structures. For Stratus or any 4k
pagesize implementation of SQL Server, certain values
will change as appropriate.

cdbnum:
This is the number of databases that can be open in SQL
Server at any given time.

clocknum:
Variable that defines and controls the number of logical
locks configured in the system.

cdesnum:
This is the number of open objects that can be open at
a given point of time.

cpcacheprcnt:
This is the percentage of cache that should be used
for procedures to be cached in.

cfillfactor:

Fill factor for indexes.

ctimeslice:
This value is in units of milliseconds. It determines
how much time a task is allowed to run before it yields.
This value is internally converted to ticks. See below
the explanations for cclkrate, ctimemax etc.

ccrdatabasesize:
The default size of the database when it is created.
This value is in megabytes and the default is 2MB.

ctappreten:

An outdated, unused variable.

crecoveryflags:
A toggle flag which will display certain recovery information
during database recoveries.

cserialno:
An informational variable that stores the serial number
of the product.

cnestedtriggers:

Flag that controls whether nested triggers allowed or not.

cnvdisks:
Variable that controls the number of device structures
that are allocated which affects the number of devices
that can be opened during server boot up. If user
defined 20 devices and this value is configured to be
10, during recovery only 10 devices will be opened and
the rest will get errors.
cfgsitebuf:
This variable controls maximum number of site handler
structures that will be allocated. This in turn
controls the number of site handlers that can be
active at a given instance.
cfgrembufs:
This variable controls the number of remote buffers
that needs to send and receive from remote sites.
Actually this value should be set to number of
logical connections configured. (See below)
cfglogconn:
This is the number of logical connections that can
be open at any instance. This value controls
the number of resource structure allocated and
hence it will affect the overall logical connection
combined with different sites. THIS IS NOT PER SITE.

cfgdatabuf:
Maximum number of pre-read packets per logical connections.
If logical connection is set to 10, and cfgdatabuf is set
to 3 then the number of resources allocated will be
30.

cfupgradeversion:

Version number of last upgrade program ran on this server.

csortord:

Sort order of the SQL Server.

cold_sortord:
When sort orders are changed the old sort order is
saved in this variable to be used during recovery
of the database after the Server is rebooted with
the sort order change.

ccharset:

Character Set used by the SQL server

cold_charset:
Same as cold_sortord except it stores the previous
Character Set.
-------------------------------------------------------------------------------

cdflt_sortord:
page # of sort order image definition. This should
not be changed at any point. This is a server only
variable.

cdflt_charset:
page # of character set image definition. This should
not be changed at any point. This is a server only
variable.

cold_dflt_sortord:
page # of previous sort order image definition. This
should not be changed at any point. This is a server
only variable.

cold_dflt_charset:
page # of previous character set image definition. This
should not be changed at any point. This is a server
only variable.
-------------------------------------------------------------------------------

cdeflang:

Default language used by SQL Server.

cmaxonline:
Maximum number of engines that can be made online. This
number should not be more than the # of cpus available on this
system. On Single CPU system like RS6000 this value is always
1.

cminonline:
Minimum number of engines that should be online. This is 1 by
default.

cengadjinterval:

A noop variable at this time.

cfgstacksz:
Stack size per task configured. This doesn't include the guard
area of the stack space. The guard area can be altered through
cguardsz.
-------------------------------------------------------------------------------

cguardsz:
This is the size of the guard area. The SQL Server will
allocate stack space for each task by adding cfgstacksz
(configurable through sp_configure) and cguardsz (default is
2K). This has to be a multiple of PAGESIZE which will be 2k
or 4k depending on the implementation.

cstacksz:
Size of fixed stack space allocated per task including the
guard area.
-------------------------------------------------------------------------------

Non-configurable values :
-------------------------------------------------------------------------------

TIMESLICE, CTIMEMAX ETC:
-------------------------------------------------------------------------------

1 millisecond = 1/1000th of a second.
1 microsecond = 1/1000000th of a second.
"Tick" : the interval between two consecutive clock interrupts in real time.

"cclkrate" :

A value specified in microsecond units.
Normally on systems where a fine grained timer is not available
or if the Operating System cannot set sub-second alarms, this
value is set to 1000000 microseconds, which is 1 second. In
other words an alarm will go off every second, or you will
get 1 tick per second.

On Sun4 this is set to 100000 microseconds, which will result in
an interrupt going off every 1/10th of a second. You will get 10
ticks per second.

"avetimeslice" :

A value specified in millisecond units.
This is the value given to sp_configure as the timeslice value.
The milliseconds are converted to microseconds and finally to
tick values:

ticks = <avetimeslice> * 1000 / cclkrate
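
For example (purely illustrative numbers): with avetimeslice = 100
milliseconds and cclkrate = 100000, ticks = 100 * 1000 / 100000 = 1 tick.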

"timeslice" :
-------------------------------------------------------------------------------
The unit of this variable is in ticks.
This value is derived from "avetimeslice". If "avetimeslice"
is less than 1000 milliseconds then timeslice is set to 1 tick.

"ctimemax" :


The unit of this variable is in ticks.
A task is considered to be in an infinite loop if the ticks
consumed by a particular task exceed the ctimemax value. This
is when you get timeslice -201 or -1501 errors.

"cschedspins" :


For more information see Q1.3.2.
This value alters the behavior of the SQL Server scheduler.
The scheduler will either run a qualified task or look
for I/O completion or sleep for a while before it can
do anything useful.

The cschedspins value determines how often the scheduler
will sleep, not how long it will sleep. A low value
is suited to an I/O-bound SQL Server, while a high value
is suited to a CPU-bound SQL Server. Since the SQL Server
is usually used in a mixed mode, this value needs to be
fine tuned.

Based on practical behavior in the field, a single engine
SQL Server should have cschedspins set to 1 and a multi-engine
server should have it set to 2000.

Now that we've defined the units of these variables, what happens when we
change cclkrate?

Assume we have a cclkrate=100000.

A clock interrupt will occur every (100000/1000000) = 1/10th of a second.
A task that starts with 1 tick and is allowed to run up to ctimemax=1500
ticks can potentially take 1/10 second * (1500 + 1) ticks, which is roughly
150 seconds per task.

Now change the cclkrate to 75000.

A clock interrupt will occur every (75000/1000000) = 0.075 seconds. A task
that starts with 1 tick and is allowed to run up to ctimemax=1500 ticks can
potentially take 0.075 seconds * (1500 + 1) ticks, which is roughly 112
seconds per task.

Decreasing the cclkrate value will decrease the time allowed to each task. If
the task cannot voluntarily yield within that time, the scheduler will kill
the task.

UNDER NO CIRCUMSTANCES should the cclkrate value be changed. The default
ctimemax value should be set to 1500. This is an empirical value and it can
only be changed under special circumstances and strictly under the guidance of
Sybase DSE.
-------------------------------------------------------------------------------

cfgdbname:
Name of the master device is saved here. This is 64
bytes in length.

cfgpss:
This is a derived value from cusrconnections + 3.
See cusrconnections above.

cfgxdes:
This value defines the number of transactions that
can be done by a task at a given instance.
Changing this value to be more than 32 will have no
effect on the server.
cfgsdes:
This value defines the number of open tables per
task. This will be typically for a query. This
will be the number of tables specified in a query
including subqueries.

Sybase advises not to change this value. There
will be a significant change in the size of the per-user
resources in SQL Server.

cfgbuf:
This is a derived variable based on the total
memory configured and subtracting different resource
sizes for Databases, Objects, Locks and other
Kernel memories.

cfgdes:
This is same as cdesnum. Other values will have no effect on it.

cfgprocedure:
This is a derived value. Based on cpcacheprcnt variable.

cfglocks:
This is same as clocknum. Other values will have no effect on it.

cfgcprot:
This is variable that defines the number of cache protectors per
task. This is used internally by the SQL Server.

Sybase advises not to modify this value, as the default of 15 will
be more than sufficient.

cnproc:
This is a derived value based on cusrconnections + <extra> for
Sybase internal tasks that are both visible and non-visible.

cnmemmap:
This is an internal variable that will keep track of SQL Server
memory.

Modifying this value will not have any effect.

cnmbox:
Number of mail box structures that need to be allocated.
More used in VMS environment than UNIX environment.

cnmsg:
Used in tandem with cnmbox.

cnmsgmax:
Maximum number of messages that can be passed between mailboxes.

cnblkio:
Number of disk I/O requests (async and direct) that can be
processed at a given instant. This is a global value for all
the engines, not a per-engine value.

This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.

cnblkmax:
Maximum number of I/O requests that can be processed at any given
time.

Normally cnblkio,cnblkmax and cnmaxaio_server should be the same.

cnmaxaio_engine:
Maximum number of I/O requests that can be processed by one engine.
Since engines are Operating System processes, if there is any limit
imposed by the Operating System on a per process basis then
this value should be set. Otherwise it is a noop.

cnmaxaio_server:
This is the total number of I/O requests the SQL Server can do.
This value is directly dependent on the number of I/O requests
that can be processed by the Operating System. It varies
depending on the Operating System.

csiocnt:
not used.

cnbytio:
Similar to disk I/O requests, this is for network I/O requests.
This includes disk/tape dumps also. This value is for
the whole SQL Server including other engines.

cnbytmax:
Maximum number of network I/O requests, including disk/tape dumps.

cnalarm:
Maximum number of alarms including the alarms used by
the system. This is typically used when users do "waitfor delay"
commands.

cfgmastmirror:
Mirror device name for the master device.

cfgmastmirror_stat:
Status of mirror devices for the master device like serial/dynamic
mirroring etc.

cindextrips:
This value determines the aging of an index buffer before it
is removed from the cache.

coamtrips:
This value determines the aging of an OAM buffer before it
is removed from the cache.

cpreallocext:
This value determines the number of extents that will be
allocated while doing BCP.

cbufwashsize:
This value determines when to flush buffers in the cache
that are modified.

Return to top
-------------------------------------------------------------------------------

1.3.7: What is CIS and how can I use it?

-------------------------------------------------------------------------------

CIS is the new name for Omni SQL Server. The biggest difference is that CIS is
included with Adaptive Server Enterprise as standard. Actually, this is not
completely accurate; the ability to connect to other ASEs and SQL Servers,
including Microsoft's, is included as standard. If you need to connect to DB2
or Oracle you have to obtain an additional licence.

So, what is it?

CIS is a means of connecting two servers together so that seamless cross-server
joins can be executed. It is not just restricted to selects, pretty much any
operation that can be performed on a local table can also be performed on a
remote table. This includes dropping it, so be careful!

What servers can I connect to?

*Sybase ASE and SQL Servers
*Microsoft SQL Server
*IBM DB2
*Oracle

What are the catches?

Well, nothing truly comes for free. CIS is not a means of providing true load
sharing, although you will find nothing explicitly in the documentation to tell
you this. Obviously there is a performance hit which seems to affect cursors
worst of all. CIS itself is implemented using cursors and this may be part of
the explanation.

OK, so how do I use it?

Easy! Add the remote server using sp_addserver. Make sure that you define it
as type sql_server or ASEnterprise. Create an "existing" table using the
definition of the remote table. Update statistics on this new "existing"
table. Then simply use it in joins exactly as if it were a local table. A
sketch of these steps is shown below.
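
As a minimal sketch (the server name REMOTESRV, the remote database salesdb,
the remote table customers and the local table orders are all made up;
sp_addobjectdef maps the local name onto the remote object, and the column
list of the "existing" table must match the remote table's definition):

sp_configure "enable cis", 1
go
sp_addserver REMOTESRV, ASEnterprise, REMOTESRV
go
sp_addobjectdef customers, "REMOTESRV.salesdb.dbo.customers", "table"
go
create existing table customers
(
    cust_id   int         not null,
    cust_name varchar(40) not null
)
go
update statistics customers
go
-- from here on it behaves like a local table
select c.cust_name, o.order_no
  from customers c, orders o
 where c.cust_id = o.cust_id
go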

Back to top
-------------------------------------------------------------------------------

1.3.8: If the master device is full, how do I make the master database bigger?

-------------------------------------------------------------------------------

It is not possible to extend the master database across another device, so the
following from Eric McGrane (mcg...@sybase.com) from Sybase Product Support
Engineering should help.

*dump the current master database

*buildmaster a new master device with a larger size

*start the server in single user mode using the new master device

*login to the server and execute the following tsql:


select * from sysdevices
*take note of the high value

*load the dump of the master you had just taken

*restart the server (as it will be shut down when master is done
loading), again in single user mode so that you can update system
tables

*login to the server and update sysdevices setting high for master to
the value that you noted previously

*shut the server down and start it back up, but this time not in single
user mode.

The end result of the above is that you will now have a larger master device
and you can alter your master database to be a larger size. For details about
starting the server in single user mode and how to use buildmaster (if you need
the details) please refer to the documentation.
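
As an illustration of the sysdevices update step only (a minimal sketch; the
high value of 51199 is made up and should be the value you noted from the new,
larger master device, and updating system tables directly requires the allow
updates configuration option):

sp_configure "allow updates", 1
go
update sysdevices
   set high = 51199     -- the value noted from the new, larger device
 where name = "master"
go
sp_configure "allow updates", 0
go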



Freeware


The best place to search for Sybase freeware is Ed Barlow (sql...@tiac.net)'s
site (http://www.edbarlow.com). He is likely to spend more time maintaining
his list than I will spend on this. I will do my best!

next prev ASE FAQ
-------------------------------------------------------------------------------

9.1: sp_freedevice

-------------------------------------------------------------------------------
use master
go

drop proc sp_freedevice
go

create proc sp_freedevice
@devname char(30) = null
as

declare @showdev bit
declare @alloc int

if @devname = null
select @devname = "%"
,@showdev = 0
else
select @showdev = 1

select @alloc = low
from master.dbo.spt_values
where type = "E"
and number = 1

create table #freedev
(
name char(30)
,size float
,used float
)

insert #freedev
select dev.name
,((dev.high - dev.low) * @alloc + 500000) / 1048576
,sum((usg.size * @alloc + 500000) / 1048576)
from master.dbo.sysdevices dev
,master.dbo.sysusages usg
where dev.low <= usg.size + usg.vstart - 1
and dev.high >= usg.size + usg.vstart - 1
and dev.cntrltype = 0
group by dev.name

insert #freedev
select name
,((high - low) * @alloc + 500000) / 1048576
,0
from master.dbo.sysdevices sd
where cntrltype = 0
and not exists (select 1
from #freedev fd
where fd.name = sd.name)

if @showdev = 1
begin
select devname = dev.name
,size = convert(varchar(10),f.size) + " MB"
,used = convert(varchar(10),f.used) + " MB"
,free = convert(varchar(10),f.size - f.used) + " MB"
from master.dbo.sysdevices dev, #freedev f
where dev.name = f.name
and dev.name like @devname

select dbase = db.name
,size = convert(varchar(10),
(usg.size * @alloc + 500000) / 1048576
) + " MB"
,usage = vl.name
from master.dbo.sysdatabases db
,master.dbo.sysusages usg
,master.dbo.sysdevices dev
,master.dbo.spt_values vl
where db.dbid = usg.dbid
and usg.segmap = vl.number
and dev.low <= usg.size + usg.vstart - 1
and dev.high >= usg.size + usg.vstart - 1
and dev.status & 2 = 2
and vl.type = "S"
and dev.name = @devname
end
else
begin

select total = convert(varchar(10), sum(size)) + " MB"
,used = convert(varchar(10), sum(used)) + " MB"
,free = convert(varchar(10), sum(size) - sum(used)) + " MB"
from #freedev

select devname = dev.name
,size = convert(varchar(10), f.size) + " MB"
,used = convert(varchar(10), f.used) + " MB"
,free = convert(varchar(10), f.size - f.used) + " MB"
from master.dbo.sysdevices dev
,#freedev f
where dev.name = f.name
end
go
grant execute on sp_freedevice to public
go
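
Once installed, sp_freedevice can be run with no argument to report on all
devices, or with a device name (the name "master" is just an example) to show
a single device and the databases using it:

sp_freedevice
go
sp_freedevice "master"
go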

Back to top
-------------------------------------------------------------------------------

9.2: sp_whodo

-------------------------------------------------------------------------------

Sybase System 10.x and above

use master
go

if object_id('sp_whodo') is not null
begin
drop procedure sp_whodo
if object_id('sp_whodo') is not null
print '<<< Failed to drop procedure sp_whodo >>>'
else
print '<<< Dropped procedure sp_whodo >>>'
end
go

create procedure sp_whodo @loginame varchar(30) = NULL
as

declare @low int
,@high int
,@spidlow int
,@spidhigh int

select @low = 0
,@high = 32767
,@spidlow = 0
,@spidhigh = 32767

if @loginame is not NULL
begin
select @low = suser_id(@loginame)
,@high = suser_id(@loginame)

if @low is NULL
begin
if @loginame like "[0-9]%"
begin
select @spidlow = convert(int, @loginame)
,@spidhigh = convert(int, @loginame)
,@low = 0
,@high = 32767
end
else
begin
print "Login %1! does not exist.", @loginame
return (1)
end
end
end

select spid
,status
,substring(suser_name(suid),1,12) loginame
,hostname
,convert(char(3), blocked) blk
,convert(char(7), isnull(time_blocked, 0)) blk_sec
,convert(char(16), program_name) program
,convert(char(7), db_name(dbid)) dbname
,convert(char(16), cmd) cmd
,convert(char(6), cpu) cpu
,convert(char(7), physical_io) io
,convert(char(16), isnull(tran_name, "")) tran_name
from master..sysprocesses
where suid >= @low
and suid <= @high
and spid>= @spidlow
and spid <= @spidhigh

return (0)

go
if object_id('sp_whodo') is not null
begin
print '<<< Created procedure sp_whodo >>>'
grant execute on sp_whodo to public
end
else
print '<<< Failed to create procedure sp_whodo >>>'
go
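
Usage is the same style as sp_who: no argument lists all processes, a login
name restricts the output to that login, and a spid can be passed as a string
(the login name "sa" and spid 42 are just examples):

sp_whodo
go
sp_whodo "sa"
go
sp_whodo "42"
go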

Sybase 4.x

Does the same as the previous version, but reports less information.

(Does anybody still use 4.x? Can I remove this?)
use master
go

if object_id('sp_whodo') is not null
begin
drop procedure sp_whodo
if object_id('sp_whodo') is not null
print '<<< Failed to drop procedure sp_whodo >>>'
else
print '<<< Dropped procedure sp_whodo >>>'
end
go

create procedure sp_whodo @loginame varchar(30) = NULL
as

declare @low int
,@high int
,@spidlow int
,@spidhigh int

select @low = 0
,@high = 32767
,@spidlow = 0
,@spidhigh = 32767
if @loginame is not NULL
begin

select @low = suser_id(@loginame)
,@high = suser_id(@loginame)

if @low is NULL
begin
if @loginame like "[0-9]%"
begin
select @spidlow = convert(int, @loginame)
,@spidhigh = convert(int, @loginame)
,@low = 0
,@high = 32767
end
else
begin
print "No login exists with the supplied name."
return (1)
end
end
end

select
spid
,status
,substring(suser_name(suid),1,12) loginame
,hostname
,convert(char(3), blocked) blk
,convert(char(16), program_name) program
,convert(char(7), db_name(dbid)) dbname
,convert(char(16), cmd) cmd
,convert(char(6), cpu) cpu
,convert(char(7), physical_io) io
from master..sysprocesses
where suid >= @low
and suid <= @high
and spid >= @spidlow
and spid <= @spidhigh

return (0)
go

if object_id('sp_whodo') is not null
begin
print '<<< Created procedure sp_whodo >>>'
grant execute on sp_whodo to public
end
else
print '<<< Failed to create procedure sp_whodo >>>'
go

Back to top
-------------------------------------------------------------------------------

9.3: Generating dump/load database command.

-------------------------------------------------------------------------------
#!/bin/sh

#
# This script calls the function gen_dumpload_command to generate
# either a dump or a load command.
#
# This function works for both System 10 and Sybase 4.x
# installations. You simply need to change your method of thinking.
# In Sybase 4.x, we only had a single stripe. In System 10, most
# of the time we define a single stripe but in our bigger databases
# we define more stripes.
#
# Therefore, everything is a stripe. Whether we use one stripe or
# many... cool? Right on!
#
#
# The function gen_dumpload_command assumes that all dump devices
# adhere to the following naming convention:
#
# stripe_NN_database
#
# NOTE: If your shop is different search for "stripe" and replace
# with your shop's value.
#
#


# gen_dumpload_command():
#
# purpose: to generate a dump/load to/from command based on
# what is defined in sysdevices. The environment
# variable D_DEV is set.
#
# return: zero on success, non-zero on failure.
#
# sets var: D_DEV is set with the actual dump/load command;
# stripe devices are also handled.
#
# calls: *none*
#
# parms: 1 = DSQUERY
# 2 = PASSWD
# 3 = DB
# 4 = CMD -> "dump" or "load"
#


gen_dumpload_command()
{
LOCAL_DSQUERY=$1
LOCAL_PASSWD=$2
DB_TO_AFFECT=$3
CMD=$4 # dump/load

if [ "$CMD" = "dump" ] ; then
VIA="to"
else
VIA="from"
fi

# Check for a dump device

echo "Checking for standard $CMD device"
D_DEV=`echo "$LOCAL_PASSWD
select name from sysdevices where name like \"stripe%_$DB_TO_AFFECT\"
go" | $SYBIN/isql -U sa -S $LOCAL_DSQUERY -w1000 | sed -n -e '/stripe/p' | \
nawk '{ if (NR == 1) print "'$CMD' database '$DB_TO_AFFECT' '$VIA'", $0
else print "stripe on", $0
}'`

if [ -z "$D_DEV" ] ; then # nothing defined... :(
return 1
fi

return 0
}

SYBIN=$SYBASE/bin

gen_dumpload_command MAG_LOAD_2 thissux wcid "dump"

if [ $? -eq 1 ] ; then
echo "Error..."
fi

# so what does this generate? :-)
echo $D_DEV

# ... and it can be used as follows:

echo "$PASSWD
$D_DEV
go" | isql ...

exit 0
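
With the (made up) call above, assuming database wcid has devices
stripe_01_wcid and stripe_02_wcid defined in sysdevices, $D_DEV would end up
containing something like:

dump database wcid to stripe_01_wcid
stripe on stripe_02_wcid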

Back to top
-------------------------------------------------------------------------------

9.4: SybPerl FAQ


This is Michael Peppler's mpep...@peppler.org FAQ.
-------------------------------------------------------------------------------

http://www.mbay.net/~mpeppler/Sybperl/sybperl-faq.html

Back to top
-------------------------------------------------------------------------------

9.5: dbschema.pl

-------------------------------------------------------------------------------

dbschema.pl is a script that will extract the schema (everything from the
server definition down to table permissions etc) from ASE/SQL Server. It was
initially developed by Michael Peppler but is currently maintained by me (David
Owen do...@midsomer.org). The script is written using Sybperl and was
originally distributed solely as part of that package. The latest copy can be
got from ftp://ftp.midsomer.org/pub/dbschema.pl.

Back to top
-------------------------------------------------------------------------------

9.6: Sybtcl FAQ


This is Tom Poindexter http://www.nyx.net/~tpoindex/ FAQ.
-------------------------------------------------------------------------------

Index of Sections

*Overview
*The enabling language platform
*Design and commands
*Applications
*Information Sources
*Download
*About the Author

-------------------------------------------------------------------------------

Overview

Sybtcl is an extension to Tcl (Tool Command Language) that allows Tcl programs
to access Sybase databases. Sybtcl adds additional Tcl commands to login to a
Sybase server, send SQL statements, retrieve result sets, execute stored
procedures, etc. Sybtcl simplifies Sybase programming by creating a high level
interface on top of DB-Library. Sybtcl can be used to program a wide variety of
applications, from system administration procedures to end-user applications.

Sybtcl runs on Unix, Windows NT and 95, and Macintosh platforms.
-------------------------------------------------------------------------------

The enabling language platform

Tool Command Language, often abbreviated "Tcl" and pronounced as "tickle", was
created by Dr. John Ousterhout at the University of California-Berkeley. Tcl is
an interpreted script language, similar to Unix shell, Awk, Perl, and others.
Tcl was designed to be easily extended, where new commands are added to the
base interpreter to provide additional functionality. Core Tcl commands contain
all of the usual constructs provided by most programming languages: setting and
accessing variables, file read/write, if-then-else, do-while, function calls.
Tcl also contains many productivity enhancing commands: list manipulation,
associative arrays, and regular expression processing.

Tcl has several features that make it a highly productive language. First, the
language is interpreted. Interpreters allow execution without a compile and
link step. Code can be developed with immediate feedback. Second, Tcl has a
single data type: string. While this might at first glance seem to be a
deficiency, it avoids problems of data conversion and memory management. (This
feature doesn't preclude Tcl from performing arithmetic operations.) Last, Tcl
has a consistent and simple syntax, much the same as the Unix shell. Every Tcl
statement is a command name, followed by arguments.

Dr. Ousterhout also developed a companion Tcl extension, called Tk. Tk provides
simplified programming of X11 applications with a Motif look and feel. X11
applications can be programmed with 60%-80% less code than equivalent Xt,
Motif, or Xview programs using C or C++.

Dr. Ousterhout now leads Tcl/Tk development at Sun Microsystems.
-------------------------------------------------------------------------------

Design and commands

Sybtcl was designed to fill the gap between pure applications development tools
(e.g. Apt, Powerbuilder, et.al.) and database administration tools, often Unix
shell scripts consisting of 'isql' and Awk pipelines. Sybtcl extends the Tcl
language with specialized commands for Sybase access. Sybtcl consists of a set
of C language functions that interface DB-Library calls to the Tcl language.

Instead of a simple one-to-one interface to DB-Library, Sybtcl provides a
high-level Sybase programming interface of its own. The following example is a
complete Sybtcl program that illustrates the simplified interface. It relies on
the Tcl interpreter, "tclsh", that has been extended with Sybtcl.
#!/usr/local/bin/tclsh
set hand [sybconnect "mysybid" "mysybpasswd"]
sybuse $hand pubs2
sybsql $hand "select au_lname, au_fname from authors order by au_lname"
sybnext $hand {
puts [format "%s, %s" @1 @2]
}
sybclose $hand
exit

In this example, a Sybase server connection is established ("sybconnect"), and
the "pubs2" sample database is accessed ("sybuse"). An SQL statement is sent to
the server ("sybsql"), and all rows returned are fetched and printed
("sybnext"). Finally, the connection is closed ("sybclose").

The same program can be made to display its output in an X11 window, with a few
changes. The Tcl/Tk windowing shell, "wish", also extended with Sybtcl is used.
#!/usr/local/bin/wish
listbox .sql_output
button .exit -text exit -command exit
pack .sql_output .exit
set hand [sybconnect "mysybid" "mysybpasswd"]
sybuse $hand pubs2
sybsql $hand "select au_lname, au_fname from authors order by au_lname"
sybnext $hand {
.sql_output insert end [format "%s, %s" @1 @2]
}
sybclose $hand

In addition to these commands, Sybtcl includes commands to access return column
names and datatypes ("sybcols"), return values from stored procedures
("sybretval"), reading and writing of "text" or "image" columns ("sybreadtext",
"sybwritetext"), canceling pending results ("sybcancel"), and polling
asynchronous SQL execution ("sybpoll").

Full access to Sybase server messages is also provided. Sybtcl maintains a Tcl
array variable which contains server messages, output from stored procedures
("print"), DB-Library and OS error message.
-------------------------------------------------------------------------------

Applications

The Sybtcl distribution includes "Wisqlite", an X11 SQL command processor.
Wisqlite provides a typical windowing style environment to enter and edit SQL
statements, list results of the SQL execution in a scrollable listbox, save or
print output. In addition, menu access to the Sybase data dictionary is
provided, listing tables in a database, the column names and datatypes of a
table, text of stored procedures and triggers.

For a snapshot of Wisqlite in action, look here.

Other applications included in the Sybtcl distribution include:

*a simple graphical performance monitor
*a version of "sp_who", with periodic refresh

Sybtcl users have reported a wide variety of applications written in Sybtcl,
ranging from end user applications to database administration utilities.
-------------------------------------------------------------------------------

Information Sources

Sybtcl is extensively documented in "Tcl/Tk Tools", edited by Mark Harrison,
published by O'Reilly and Associates, 1997, ISBN: 1-56592-218-2.

Tcl/Tk is described in detail in "Tcl and the Tk Toolkit" by Dr. John
Ousterhout, Addison-Wesley Publishing 1994 ISBN: 0-201-63337-X . Another recent
publication is "Practical Programming in Tcl and Tk" by Brent Welch, Prentice
Hall 1995 ISBN 0-13-182007-9.

A wealth of information on Tcl/Tk is available via Internet sources:


news:comp.lang.tcl
http://www.neosoft.com/tcl/
http://www.sco.com/Technology/tcl/Tcl.html
ftp://ftp.neosoft.com/pub/tcl/

-------------------------------------------------------------------------------

Download

Download Sybtcl in tar.gz format for Unix.
Download Sybtcl in zip format for Windows NT and 95.

Tcl/Tk and Sybtcl are both released in source code form under a "BSD" style
license. Tcl/Tk and Sybtcl may be freely used for any purpose, as long as
copyright credit is given to the respective owners. Tcl/Tk can be obtained from
either anonymous FTP site listed above.

Tcl/Tk and Sybtcl can be easily configured under most modern Unix systems
including SunOS, Solaris, HP-UX, Irix, OSF/1, AIX, SCO, et al. Sybtcl also runs
under Windows NT and 95; pre-compiled DLLs are included in the distribution.
Sybtcl requires Sybase's DB-Library, from Sybase's Open Client bundle.

Current versions are:

*Sybtcl 2.5: released January 8, 1998
*Tcl 8.0: released August 13, 1997
*Tk 8.0: released August 13, 1997

The Internet newsgroup comp.lang.tcl is the focal point for support. The group
is regularly read by developers and users alike. Authors may also be reached
via email. Sun has committed to keeping Tcl/Tk as freely available software.
-------------------------------------------------------------------------------

About the Author

Tom Poindexter is a consultant with expertise in Unix, relational databases,
systems and application programming. He holds a B.S. degree from the University
of Missouri, and an M.B.A. degree from Illinois State University. He can be
reached at tpoi...@nyx.net.

Back to top
-------------------------------------------------------------------------------

9.7: Extended Stored Procedures

-------------------------------------------------------------------------------

The following stored procedures were written by Ed Barlow (sql...@tiac.net) and
can be fetched from the following site:


http://www.edbarlow.com

Here's a pseudo-man page of what you get:
Modified Sybase Procedures
+-----------------+-------------------------------------------+
| Command | Description |
+-----------------+-------------------------------------------+
| sp__help | Better sp_help |
+-----------------+-------------------------------------------+
| sp__helpdb | Database Information |
+-----------------+-------------------------------------------+
| sp__helpdevice | Break down database devices into a nice |
| | report |
+-----------------+-------------------------------------------+
| sp__helpgroup | List groups in database by access level |
+-----------------+-------------------------------------------+
| sp__helpindex | Shows indexes by table |
+-----------------+-------------------------------------------+
| sp__helpsegment | Segment Information |
+-----------------+-------------------------------------------+
| sp__helpuser | Lists users in current database by group |
| | (include aliases) |
+-----------------+-------------------------------------------+
| sp__lock | Lock information |
+-----------------+-------------------------------------------+
| sp__who | sp_who that fits on a page |
+-----------------+-------------------------------------------+
Audit Procedures
+-------------------+-----------------------------------------+
| Command | Description |
+-------------------+-----------------------------------------+
| sp__auditsecurity | Security Audit On Server |
+-------------------+-----------------------------------------+
| sp__auditdb | Audit Current Database For Potential |
| | Problems |
+-------------------+-----------------------------------------+
System Administrator Procedures
+----------------+--------------------------------------------+
| Command | Description |
+----------------+--------------------------------------------+
| sp__block | Blocking processes. |
+----------------+--------------------------------------------+
| sp__dbspace | Summary of current database space |
| | information. |
+----------------+--------------------------------------------+
| sp__dumpdevice | Listing of Dump devices |
+----------------+--------------------------------------------+
| sp__helpdbdev | Show how Databases use Devices |
+----------------+--------------------------------------------+
| sp__helplogin | Show logins and remote logins to server |
+----------------+--------------------------------------------+
| sp__helpmirror | Shows mirror information, discover broken |
| | mirrors |
+----------------+--------------------------------------------+
| sp__segment | Segment Information |
+----------------+--------------------------------------------+
| sp__server | Server summary report (very useful) |
+----------------+--------------------------------------------+
| sp__vdevno | Who's who in the device world |
+----------------+--------------------------------------------+
DBA Procedures
+-----------------+-------------------------------------------+
| Command | Description |
+-----------------+-------------------------------------------+
| sp__badindex | give information about bad indexes |
| | (nulls, bad statistics...) |
+-----------------+-------------------------------------------+
| sp__collist | list all columns in database |
+-----------------+-------------------------------------------+
| sp__indexspace | Space used by indexes in database |
+-----------------+-------------------------------------------+
| sp__noindex | list of tables without indexes. |
+-----------------+-------------------------------------------+
| sp__helpcolumns | show columns for given table |
+-----------------+-------------------------------------------+
| sp__helpdefault | list defaults (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__helpobject | list objects |
+-----------------+-------------------------------------------+
| sp__helpproc | list procs (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__helprule | list rules (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__helptable | list tables (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__helptrigger | list triggers (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__helpview | list views (part of objectlist) |
+-----------------+-------------------------------------------+
| sp__trigger | Useful synopsis report of current |
| | database trigger schema |
+-----------------+-------------------------------------------+
Reverse Engineering
+------------------+------------------------------------------+
| Command | Description |
+------------------+------------------------------------------+
| sp__revalias | get alias script for current db |
+------------------+------------------------------------------+
| sp__revdb | get db creation script for server |
+------------------+------------------------------------------+
| sp__revdevice | get device creation script |
+------------------+------------------------------------------+
| sp__revgroup | get group script for current db |
+------------------+------------------------------------------+
| sp__revindex | get indexes script for current db |
+------------------+------------------------------------------+
| sp__revlogin | get logins script for server |
+------------------+------------------------------------------+
| sp__revmirror | get mirroring script for server |
+------------------+------------------------------------------+
| sp__revuser | get user script for current db |
+------------------+------------------------------------------+
Other Procedures
+----------------+--------------------------------------------+
| Command | Description |
+----------------+--------------------------------------------+
| sp__bcp | Create unix script to bcp in/out database |
+----------------+--------------------------------------------+
| sp__date | Who can remember all the date styles? |
+----------------+--------------------------------------------+
| sp__quickstats | Quick dump of server summary information |
+----------------+--------------------------------------------+
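
Once installed (the scripts are normally loaded into sybsystemprocs on System
10 and later, or master on older servers - check the install script in the
package), the procedures are run from isql exactly like the standard system
procedures, for example:

1> sp__who
2> go
1> sp__dbspace
2> go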

Back to top
-------------------------------------------------------------------------------

9.9: SQL to determine space used for an index

-------------------------------------------------------------------------------

OK, here's sp_spaceused reduced to bare essentials:
set nocount on
declare @objname varchar(30)
select @objname = "your table"

select index_name = i.name,
i.segment,
rowtotal = rowcnt(i.doampg),
reserved = reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg),
data = data_pgs(i.id, i.doampg),
index_size = data_pgs(i.id, i.ioampg),
unused = (reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg) -
(data_pgs(i.id, i.doampg) +
data_pgs(i.id, i.ioampg)))
into #space
from sysindexes i
where i.id = object_id(@objname)

You can analyse this in a number of ways:

1. This query should tally with sp_spaceused @objname:
select 'reserved KB' = sum(reserved) * 2,
'Data KB' = sum(data) * 2,
'Index KB' = sum(index_size) * 2,
'Unused KB' = sum(unused) * 2
from #space

2. This one reports space allocation by segment:
select 'segment name' = s.name,
'reserved KB' = sum(reserved) * 2,
'Data KB' = sum(data) * 2,
'Index KB' = sum(index_size) * 2,
'Unused KB' = sum(unused) * 2
from #space t,
syssegments s
where t.segment = s.segment
group by s.name

3. This one reports allocations by index:
select t.index_name,
s.name,
'reserved KB' = reserved * 2,
'Data KB' = data * 2,
'Index KB' = index_size * 2,
'Unused KB' = unused * 2
from #space t,
syssegments s
where t.segment = s.segment

If you leave out the where clause in the initial select into, you can analyse
across the whole database.
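
For example, the initial select into could be rewritten along the following
lines (an untested sketch) to cover every table, keyed by object name; the
three reports above then just need the extra column added to their select and
group by lists:

select table_name = object_name(i.id),
       index_name = i.name,
       i.segment,
       rowtotal   = rowcnt(i.doampg),
       reserved   = reserved_pgs(i.id, i.doampg) +
                    reserved_pgs(i.id, i.ioampg),
       data       = data_pgs(i.id, i.doampg),
       index_size = data_pgs(i.id, i.ioampg),
       unused     = (reserved_pgs(i.id, i.doampg) +
                     reserved_pgs(i.id, i.ioampg) -
                     (data_pgs(i.id, i.doampg) +
                      data_pgs(i.id, i.ioampg)))
into #space
from sysindexes i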

Hope this points you in the right direction.

Back to top
-------------------------------------------------------------------------------

9.10: xsybmon

-------------------------------------------------------------------------------

The original site, NSCU, no longer carries these bits. If you feel that it's
useful to have xsybmon and you know where the new bits are, please drop me an
e-mail: do...@midsomer.org

There is an alternative at http://www.neosoft.com/tcl/ftparchive/sorted/
databases/syperf/ that provides some of the functionality and uses Tcl/Tk.

Back to top
-------------------------------------------------------------------------------

9.11: sp_dos

-------------------------------------------------------------------------------
/*>>>>>>>>>>>>>>>>>>>>>>>>>>> sp_dos <<<<<<<<<<<<<<<<<<<<<<<<<<<<<*/
IF OBJECT_ID('dbo.sp_dos') IS NOT NULL
DROP PROCEDURE sp_dos
go

CREATE PROCEDURE sp_dos
@vcObjectName varchar(30) = NULL
AS
/***********************************************************************
* sp_dos - Display Object Scope
* This procedure graphically displays the scope of a object in
* the database.
*
* Copyright 1996, all rights reserved.
*
* Author: David W. Pledger, Strategic Data Systems, Inc.
*
* Parameters
* ----------------------------------------------------------------
* Name In/Out Description
* ----------------------------------------------------------------
* @vcObjectName In Mandatory - The exact name of a single
* database object for which the call
* hierarchy is to be extracted.
*
* Selected Data
* A sample report follows:
* ----------------------------------------------------------------
*
* SCOPE OF EFFECT FOR OBJECT: ti_users
* +------------------------------------------------------------------+
* (T) ti_users (Trigger on table 'users')
* |
* +--(P) pUT_GetError
* | |
* | +--(U) ui_error
* |
* +--(U) BGRP
* |
* +--(U) user_information (See Triggers: tu_user_information)
* |
* +--(U) users (See Triggers: ti_users, tu_users, td_users)
* |
* +--(P) pUT_LUDVersion
* |
* +--(P) pUT_GetError
* | |
* | +--(U) ui_error
* |
* +--(U) BGRP_LUDVersion
*
* <End of Sample>
*
* Return Values
* ----------------------------------------------------------------
* Value Description
* ----------------------------------------------------------------
* < -99 Unexpected error - should never occur.
*
* -99 to -1 Sybase **reserved** return status values.
*
* 0 Execution succeeded
*
* 1 Execution of this procedure failed.
*
* > 1 Unexpected error - should never occur.
*
***********************************************************************/
BEGIN

/*------------------- Local Declarations -------------------------*/
DECLARE @iObjectID int /* System ID of object */
DECLARE @cObjectType char(1) /* System Object Type code */
DECLARE @vcName varchar(30) /* System Object name */
DECLARE @vcMsg varchar(255) /* Error Message if needed */
DECLARE @iInsTrigID int /* Insert Trigger ID */
DECLARE @iUpdTrigID int /* Update Trigger ID */
DECLARE @iDelTrigID int /* Delete Trigger ID */
DECLARE @vcErrMsg varchar(255) /* Error Message */

/* Local variables to facilitate descending the parent-child
** object hierarchy.
*/
DECLARE @iCurrent int /* Current node in the tree */
DECLARE @iRoot int /* The root node in the tree */
DECLARE @iLevel int /* The current level */

/* Local variables that contain the fragments of the text to
** be displayed while descending the hierarchy.
*/
DECLARE @iDotIndex int /* Index for locating periods */
DECLARE @cConnector char(3) /* '+--' */
DECLARE @cSibSpacer char(3) /* '| ' */
DECLARE @cBar char(1) /* '|' */
DECLARE @cSpacer char(3) /* ' ' */
DECLARE @cPrntStrng1 char(255) /* The first string to print */
DECLARE @cPrntStrng2 char(255) /* The second string to print */
DECLARE @iLoop int /* Temp var used for loop */
DECLARE @vcDepends varchar(255) /* Dependency String */
DECLARE @iDependsItem int /* Index to a string item */

/* Create a temporary table to handle the hierarchical
** decomposition of the task parent-child relationship. The Stack
** table keeps track of where we are while the leaf table keeps
** track of the leaf tasks which need to be performed.
*/
CREATE TABLE #Stack
(iItem int,
iLevel int)

/*------------------- Validate Input Parameters --------------------*/
/* Make sure the table is local to the current database. */
IF (@vcObjectName LIKE "%.%.%") AND (SUBSTRING(@vcObjectName, 1,
CHARINDEX(".", @vcObjectName) - 1) != DB_NAME())
GOTO ErrorNotLocal

/* Now check to see that the object is in sysobjects. */
IF OBJECT_ID(@vcObjectName) IS NULL
GOTO ErrorNotFound

/* ---------------------- Initialization -------------------------*/

/* Don't print any rowcounts while this is in progress. */
SET NOCOUNT ON

/* Retrieve the object ID out of sysobjects */
SELECT @iObjectID = O.id,
@cObjectType = O.type
FROM sysobjects O
WHERE O.name = @vcObjectName

/* Make sure a job exists. */
IF NOT (@@rowcount = 1 and @@error = 0 and @iObjectID > 0)
GOTO ErrorNotFound

/* Initialize the print string pieces. */
SELECT @cConnector = "+--",
@cSibSpacer = "|..",
@cBar = "|",
@cSpacer = "...",
@cPrntStrng1 = "",
@cPrntStrng2 = ""

/* Print a separator line. */
PRINT " "
PRINT "** Utility by David Pledger, Strategic Data Systems, Inc. **"
PRINT "** PO Box 498, Springboro, OH 45066 **"
PRINT " "
PRINT " SCOPE OF EFFECT FOR OBJECT: %1!",@vcObjectName
PRINT "+------------------------------------------------------------------+"

/* -------------------- Show the Hierarchy -----------------------*/
/* Find the root task for this job. The root task is the only task
** that has a parent task ID of null.
*/
SELECT @iRoot = @iObjectID

/* Since there is a root task, we can assign the first
** stack value and assign it a level of one.
*/
SELECT @iCurrent = @iRoot,
@iLevel = 1

/* Prime the stack with the root level. */
INSERT INTO #Stack values (@iCurrent, 1)

/* As long as there are nodes which have not been visited
** within the tree, the level will be > 0. Continue until all
** nodes are visited. This outer loop descends the tree through
** the parent-child relationship of the nodes.
*/
WHILE (@iLevel > 0)
BEGIN

/* Do any nodes exist at the current level? If yes, process them.
** If no, then back out to the previous level.
*/
IF EXISTS
(SELECT *
FROM #Stack S
WHERE S.iLevel = @iLevel)
BEGIN

/* Get the smallest numbered node at the current level. */
SELECT @iCurrent = min(S.iItem)
FROM #Stack S
WHERE S.iLevel = @iLevel

/* Get the name and type of this node. */
SELECT @cObjectType = O.type,
@vcName = O.name,
@iInsTrigID = ISNULL(O.instrig, 0),
@iUpdTrigID = ISNULL(O.updtrig, 0),
@iDelTrigID = ISNULL(O.deltrig, 0)
FROM sysobjects O
WHERE O.id = @iCurrent

/*
* *=================================================* *
* * Print out data for this node. (Consider * *
* * making this a separate procedure.) * *
* *=================================================* *
*/

/* Initialize the print strings to empty (different from NULL).
** @cPrntStrng1 is used to 'double space' the output and
** contains the necessary column connectors, but no data.
** @cPrntStrng2 contains the actual data at the end of the
** string.
*/
SELECT @cPrntStrng1 = ""
SELECT @cPrntStrng2 = ""

/* Level 1 is the root node level. All Jobs have a single
** root task. All other tasks are subordinate to this task.
** No job may have more than one root task.
*/
IF @iLevel = 1
BEGIN
/* Print data for the root node. */
SELECT @cPrntStrng1 = "",
@cPrntStrng2 = "(" + @cObjectType + ") " + @vcName
END
ELSE /* Else part of (IF @iLevel = 1) */
BEGIN

/* Initialize loop variable to 2 since level one has
** already been processed for printing.
*/
SELECT @iLoop = 2

/* Look at the values on the stack at each level to
** determine which symbol should be inserted into the
** print string.
*/
WHILE @iLoop <= @iLevel
BEGIN

/* While the loop variable is less than the current
** level, add the appropriate spacer to line up
** the printed output.
*/
IF @iLoop < @iLevel
BEGIN

/* Is there a sibling (another node which exists
** at the same level) on the stack? If so, use
** one type of separator; otherwise, use another
** type of separator.
*/
IF EXISTS(SELECT * FROM #Stack WHERE iLevel = @iLoop)
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) +
@cSibSpacer
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
@cSibSpacer
END
ELSE
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cSpacer
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) + @cSpacer
END
END
ELSE /* Else part of (IF @iLoop < @iLevel) */
BEGIN
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1) + @cBar
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2) +
@cConnector + "(" + @cObjectType + ") " +
@vcName
END

/* Increment the loop variable */
SELECT @iLoop = @iLoop + 1

END /* While @iLoop <= @iLevel */
END /* IF @iLevel = 1 */

/* Spaces are inserted into the string to separate the levels
** into columns in the printed output. Spaces, however, caused
** a number of problems when attempting to concatenate the
** two strings together. To perform the concatenation, the
** function rtrim was used to remove the end of the string.
** This also removed the spaces we just added. To alleviate
** this problem, we used a period (.) wherever there was
** supposed to be a space. Now that we are ready to print
** the line of text, we need to substitute real spaces
** wherever there is a period in the string. To do this,
** we simply look for periods and substitute spaces. This
** has to be done in a loop since there is no mechanism to
** make this substitution in the whole string at once.
*/

/* Find the first period. */
SELECT @iDotIndex = charindex (".", @cPrntStrng1)

/* If a period exists, substitute a space for it and then
** find the next period.
*/
WHILE @iDotIndex > 0
BEGIN
/* Substitute the space */
SELECT @cPrntStrng1 = stuff(@cPrntStrng1, @iDotIndex, 1, " ")

/* Find the next. */
SELECT @iDotIndex = charindex (".", @cPrntStrng1)
END

/* Do the same thing for the second print string. */
SELECT @iDotIndex = charindex (".", @cPrntStrng2)
WHILE @iDotIndex > 0
BEGIN
SELECT @cPrntStrng2 = stuff(@cPrntStrng2, @iDotIndex, 1, " ")
SELECT @iDotIndex = charindex (".", @cPrntStrng2)
END

SELECT @vcDepends = NULL

IF @iInsTrigID > 0
SELECT @vcDepends = OBJECT_NAME(@iInsTrigID) + " (Insert)"

IF @iUpdTrigID > 0
IF @vcDepends IS NULL
SELECT @vcDepends = OBJECT_NAME(@iUpdTrigID) + " (Update)"
ELSE
SELECT @vcDepends = @vcDepends + ", " +
OBJECT_NAME(@iUpdTrigID) + " (Update)"

IF @iDelTrigID > 0
IF @vcDepends IS NULL
SELECT @vcDepends = OBJECT_NAME(@iDelTrigID) + " (Delete)"
ELSE
SELECT @vcDepends = @vcDepends + ", " +
OBJECT_NAME(@iDelTrigID) + " (Delete)"

IF @vcDepends IS NOT NULL
IF @cObjectType = "T"
SELECT @cPrntStrng2 = @cPrntStrng2 +
" (Trigger on table '" + @vcDepends + "')"
ELSE
SELECT @cPrntStrng2 = @cPrntStrng2 +
" (See Triggers: " + @vcDepends + ")"

/* Remove trailing blanks from the first print string. */
SELECT @cPrntStrng1 = rtrim(@cPrntStrng1)
SELECT @cPrntStrng2 = rtrim(@cPrntStrng2)

/* Print the two strings. */
PRINT @cPrntStrng1
PRINT @cPrntStrng2

/* Remove the current entry from the stack (Pop) */
DELETE #Stack
WHERE #Stack.iLevel = @iLevel
AND #Stack.iItem = @iCurrent

/* Add (push) to the stack all the children of the current
** node.
*/
INSERT INTO #Stack
SELECT D.depid,
@iLevel + 1
FROM sysdepends D
WHERE D.id = @iCurrent

/* If any were added, then we must descend another level. */
IF @@rowcount > 0
BEGIN
SELECT @iLevel = @iLevel + 1
END

END
ELSE
BEGIN
/* We have reached a leaf node. Move back to the previous
** level and see what else is left to process.
*/
SELECT @iLevel = @iLevel - 1
END

END /* While (@iLevel > 0) */

PRINT " "

RETURN (0)

/*------------------------ Error Handling --------------------------*/
ErrorNotLocal:
/* 17460, Table must be in the current database. */
EXEC sp_getmessage 17460, @vcErrMsg OUT
PRINT @vcErrMsg
RETURN (1)

ErrorNotFound:
/* 17461, Table is not in this database. */
EXEC sp_getmessage 17461, @vcErrMsg OUT
PRINT @vcErrMsg
PRINT " "

PRINT "Local object types and objecs are:"

SELECT "Object Type" = type,
"Object Name" = name
FROM sysobjects
WHERE type IN ("U","TR","P","V")
ORDER BY type, name

RETURN (1)

END
go

grant execute on sp_dos to public
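
Once created, the procedure is called with the name of a single object, for
example using the trigger from the sample report in the header comment:

1> sp_dos ti_users
2> go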


9.17: How to access a SQL Server using Linux

-------------------------------------------------------------------------------

Some time back, Sybase released a binary distribution of ctlib for Linux. This
is just the header and library files for ctlib only, not dblib, not isql, not
bcp, not the dataserver and not the OpenServer. This was done as a skunk works
internal project at Sybase, for the good of the Linux community, and not
supported by Sybase in any official capacity. This version of ctlib identifies
itself as 10.0.3.

At the time, the binary format for Linux libraries was a format called a.out.
Since then, the format has changed to the newer, ELF format. ELF libraries and
.o files cannot be linked with a.out libraries and .o files. Fortunately, a.out
libraries and .o files can easily be converted to ELF via the objdump(1)
program.

Getting a useable ctlib for Linux isn't that easy, though. Another
compatibility problem has arisen since these old libraries were compiled. The
byte-order for the ctype macros has changed. One can link to the
(converted-to-ELF) ctlib, but running the resulting executable will result in
an error message having to do with missing localization files. The problem is
that the ctype macros in the compiled ctlib libraries are accessing a structure
in the shared C library which has changed its byte order.

I've converted the a.out library, as distributed by Sybase to ELF, and added
the old tables directly to the library, so that it won't find the wrong ones in
libc.

Using this library, I can link and run programs on my Linux machines against
Sybase databases (it can also run some programs against Microsoft SQL Server,
but that's another FAQ). However, you must be running Linux 2.0 or later, or
else the link phase will core dump.

This library is available for ftp at:

*ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/sybperl.tar.gz
*ftp://mudshark.sunquest.com/pub/ctlib-linux-elf/ctlib-linux-elf.tgz

The sybperl.tar.gz file is a compiled version of sybperl 2.0, built with the
above library. Obviously, only the ctlib module is in that distribution.

In order to use this code, you will need a Sybase dataserver, a Sybase
interfaces file (in the non-TLI format -- see Q9.16), a user named sybase in
your /etc/passwd file, whose home directory is the root of the distribution,
and some application code to link to.

As far as an isql replacement goes, use sqsh - Q9.12.

One of the libraries in the usual Sybase distribution is libtcl.a. This
conflicts with the library on Linux which implements the Tcl scripting
language, so this distribution names that library libsybtcl.a, which might
cause some porting confusion.
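
A typical link line therefore uses -lsybtcl rather than -ltcl. The following is
only indicative - the library names are assumptions based on a typical Open
Client 10.x link line, so check what is actually present in $SYBASE/lib:

# indicative only; adjust the -l list to the libraries in your $SYBASE/lib
cc -o myapp myapp.c -I$SYBASE/include -L$SYBASE/lib \
   -lct -lcs -lcomn -lintl -lsybtcl -lm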


The above conflict problem is addressed by SybPerl - Q9.4 and sqsh - Q9.12

More information

See Q11.4.6 for more information on setting up DBI/DBD:Sybase

Back to top
-------------------------------------------------------------------------------

9.18: sp__revroles

-------------------------------------------------------------------------------
/*
* DROP PROC sp__revroles
*/
IF OBJECT_ID('sp__revroles') IS NOT NULL
BEGIN
DROP PROC sp__revroles
PRINT '<<< Dropped proc sp__revroles >>>'
END
go
create procedure sp__revroles
as
/* Created 03/05/97 by Clayton Groom
creates a reverse engineered set of commands to restore user roles
*/
select "exec sp_role grant, " + u.name + ", " + s.name + char(13) +char(10) + "go"
from master..syssrvroles s,
sysroles r,
sysusers u
where r.id = s.srid
and r.lrid = u.uid
and s.name <> u.name
go

IF OBJECT_ID('sp__revroles') IS NOT NULL
PRINT '<<< Created proc sp__revroles >>>'
ELSE
PRINT '<<< Failed to create proc sp__revroles >>>'
go

Back to top
-------------------------------------------------------------------------------

9.19: sp__rev_configure

-------------------------------------------------------------------------------
use sybsystemprocs
go
drop procedure sp__rev_configure
go
create procedure sp__rev_configure
as
declare @sptlang int /* current sessions language */
declare @whichone int /* using english or default lang ? */

if @@trancount = 0
begin
set transaction isolation level 1
set chained off
end

select @whichone = 0

select @sptlang = @@langid

if @@langid != 0
begin
if not exists (
select * from master.dbo.sysmessages where error
between 17015 and 17049
and langid = @@langid)
select @sptlang = 0
else
if not exists (
select * from master.dbo.sysmessages where error
between 17100 and 17109
and langid = @@langid)
select @sptlang = 0
end

if @sptlang = 0
begin
select "-- sp_configure settings"
= "sp_configure '" + name + "', "
+ convert( char(12), c.value)
+ char(13) + char(10) + "go"
from master.dbo.spt_values a,
master.dbo.syscurconfigs c
where a.type = "C"
and a.number *= c.config
and a.number >= 0
end
else
begin
select "-- sp_configure settings"
= "sp_configure '" + name + "', "
+ convert(char(12), c.value)
+ char(13) + char(10) + "go"
from master.dbo.spt_values a,
master.dbo.syscurconfigs c,
master.dbo.sysmessages d
where type = "C"
and a.number *= c.config
and a.number >= 0
and msgnum = error and isnull(langid, 0) = @sptlang
end
return (0)
go
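
The output is itself a runnable script, so one way to use the procedure (a
sketch only - the login, server and file names are made up) is to run it
through isql and capture the result:

isql -Usa -SMYSERVER -o myserver_config.sql <<EOF
exec sp__rev_configure
go
EOF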

Back to top
-------------------------------------------------------------------------------

9.20: sp_servermap

-------------------------------------------------------------------------------
USE sybsystemprocs
go
/*
* DROP PROC dbo.sp_servermap
*/
IF OBJECT_ID('dbo.sp_servermap') IS NOT NULL
BEGIN
DROP PROC dbo.sp_servermap
PRINT '<<< DROPPED PROC dbo.sp_servermap >>>'
END
go

create proc sp_servermap (@selection varchar(10) = "ABCDEF")
as

/* produces 6 "reports" against all possible data in
master..sysdatabases
master..sysdevices
master..sysusages

sp_servermap help
produces a list of the six reports.
A subset of the complete set of reports can be requested by passing
an argument that consists of a string containing the letters of the
desired report.

This procedure was developed on 4.9.1 server. It will run on 4.8
and 10.0 servers, but it has not been verified that the results
produced are correct.
*/

declare @atitle varchar(40),
@btitle varchar(40),
@ctitle varchar(40),
@dtitle varchar(40),
@etitle varchar(40),
@ftitle varchar(40),
@stars varchar(40),
@xstars varchar(40)

set nocount on

select @atitle = "A - DATABASE SEGMENT MAP",
@btitle = "B - DATABASE INFORMATION",
@ctitle = "C - DEVICE ALLOCATION MAP",
@dtitle = "D - DEVICE NUMBER, DEFAULT & SPACE USAGE",
@etitle = "E - DEVICE LOCATION",
@ftitle = "F - MIRRORED DEVICES",
@selection = upper(@selection),
@stars = replicate("*",40)

if @selection = "HELP" begin
print @atitle
print @btitle
print @ctitle
print @dtitle
print @etitle
print @ftitle
print ""
print "select any combination of reports by entering a string of"
print "report letters as the argument to sp_servermap:"
print " sp_servermap acd"
print "will select reports A,C and D."
print "calling sp_servermap with no argument will produce all reports"
return
end

select @@servername, "Current Date/Time" = getdate()
select "Version" = @@version

if charindex("A",@selection) > 0
begin
print ""
print @atitle
select @xstars = substring(@stars,1,datalength(@atitle))
print @xstars

select db=substring(db.name,1,15),db.dbid,
usg.segmap,
segs = substring(" U",sign(usg.segmap/8)+1,1) +
substring(" L",(usg.segmap & 4)/4+1,1) +
substring(" D",(usg.segmap & 2)/2+1,1) +
substring(" S",(usg.segmap & 1)+1,1),
"device fragment"=substring(dev.name,1,15),
"start (pg)" = usg.vstart,"size (MB)" = str(usg.size/512.,7,2)
from master.dbo.sysusages usg,
master.dbo.sysdevices dev,
master.dbo.sysdatabases db
where vstart between low and high
and cntrltype = 0
and db.dbid = usg.dbid
order by db.dbid, usg.lstart

print ""
print"Segment Codes:"
print "U=User-defined segment on this device fragment"
print "L=Database Log may be placed on this device fragment"
print "D=Database objects may be placed on this device fragment by DEFAULT"
print "S=SYSTEM objects may be placed on this device fragment"
print ""
end

if charindex("B",@selection) > 0
begin
print ""
print @btitle
select @xstars = substring(@stars,1,datalength(@btitle))
print @xstars

select db=substring(db.name,1,15),
db.dbid,
"size (MB)" = str(sum(usg.size)/512.,7,2),
"db status codes " = substring(" A",(status & 4)/4+1,1) +
substring(" B",(status & 8)/8+1,1) +
substring(" C",(status & 16)/16+1,1) +
substring(" D",(status & 32)/32+1,1) +
substring(" E",(status & 256)/256+1,1) +
substring(" F",(status & 512)/512+1,1) +
substring(" G",(status & 1024)/1024+1,1) +
substring(" H",(status & 2048)/2048+1,1) +
substring(" I",(status & 4096)/4096+1,1) +
substring(" J",(status & 16384)/16384+1,1) +
substring(" K",(status & 64)/64+1,1) +
substring(" L",(status & 128)/128+1,1) +
substring(" M",(status2 & 1)/1+1,1) +
substring(" N",(status2 & 2)/2+1,1) +
substring(" O",(status2 & 4)/4+1,1) +
substring(" P",(status2 & 8)/8+1,1) +
substring(" Q",(status2 & 16)/16+1,1) +
substring(" R",(status2 & 32)/32+1,1),
"created" = convert(char(9),crdate,6) + " " +
convert(char(5),crdate,8),
"dump tran" = convert(char(9),dumptrdate,6) + " " +
convert(char(5),dumptrdate,8)

from master.dbo.sysdatabases db,
master.dbo.sysusages usg

where db.dbid =usg.dbid
group by db.dbid
order by db.dbid

print ""
print "Status Code Key"
print ""
print "Code Status"
print "---- ----------------------------------"
print " A select into/bulk copy allowed"
print " B truncate log on checkpoint"
print " C no checkpoint on recovery"
print " D db in load-from-dump mode"
print " E db is suspect"
print " F ddl in tran"
print " G db is read-only"
print " H db is for dbo use only"
print " I db in single-user mode"
print " J db name has been changed"
print " K db is in recovery"
print " L db has bypass recovery set"
print " M abort tran on log full"
print " N no free space accounting"
print " O auto identity"
print " P identity in nonunique index"
print " Q db is offline"
print " R db is offline until recovery completes"
print ""
end

if charindex("C",@selection) > 0
begin
print ""
print @ctitle
select @xstars = substring(@stars,1,datalength(@ctitle))
print @xstars

select "device fragment"=substring(dev.name,1,15),
"start (pg)" = usg.vstart,"size (MB)" = str(usg.size/512.,7,2),
db=substring(db.name,1,15),
lstart,
segs = substring(" U",sign(usg.segmap/8)+1,1) +
substring(" L",(usg.segmap & 4)/4+1,1) +
substring(" D",(usg.segmap & 2)/2+1,1) +
substring(" S",(usg.segmap & 1)+1,1)
from master.dbo.sysusages usg,
master.dbo.sysdevices dev,
master.dbo.sysdatabases db
where usg.vstart between dev.low and dev.high
and dev.cntrltype = 0
and db.dbid = usg.dbid
group by dev.name, usg.vstart, db.name
having db.dbid = usg.dbid
order by dev.name, usg.vstart


print ""
print "Segment Codes:"
print "U=USER-definedsegment on this device fragment"
print "L=Database LOG may be placed on this device fragment"
print "D=Database objects may be placed on this device fragment by DEFAULT"
print "S=SYSTEM objects may be placed on this device fragment"
print ""
end

if charindex("D",@selection) > 0
begin
print ""
print @dtitle
select @xstars = substring(@stars,1,datalength(@dtitle))
print @xstars

declare @vsize int
select @vsize = low

from master.dbo.spt_values
where type="E"

and number = 3

select device = substring(name,1,15),
vdevno = convert(tinyint,substring(convert(binary(4),low),@vsize,1)),
"default disk?" = " " + substring("NY",(status & 1)+1,1),
"total (MB)" = str(round((high-low)/512.,2),7,2),
used = str(round(isnull(sum(size),0)/512.,2),7,2),
free = str(round(abs((high-low-isnull(sum(size),0))/512.),2),7,2)
from master.dbo.sysusages,
master.dbo.sysdevices
where vstart between low and high
and cntrltype=0
group by all name
having cntrltype=0
order by vdevno
end

if charindex("E",@selection) > 0
begin
print ""
print @etitle
select @xstars = substring(@stars,1,datalength(@etitle))
print @xstars

select device = substring(name,1,15),
location = substring(phyname,1,60)
from master.dbo.sysdevices
where cntrltype=0
end

if charindex("F",@selection) > 0
begin
if exists (select 1
from master.dbo.sysdevices
where status & 64 = 64)
begin

print ""
print @ftitle
select @xstars = substring(@stars,1,datalength(@ftitle))
print @xstars

select device = substring(name,1,15),
pri =" " + substring("* **",(status/256)+1,1),
sec = " " + substring(" ***",(status/256)+1,1),
serial = " " + substring(" *",(status & 32)/32+1,1),
"mirror" = substring(mirrorname,1,35),
reads = " " + substring(" *",(status & 128)/128+1,1)
from master.dbo.sysdevices
where cntrltype=0
and status & 64 = 64
end
else
begin
print ""
print "NO DEVICES ARE MIRRORED"
end
end

set nocount off

go
IF OBJECT_ID('dbo.sp_servermap') IS NOT NULL
BEGIN
PRINT '<<< CREATED PROC dbo.sp_servermap >>>'
grant execute on dbo.sp_servermap to sa_role
END
ELSE
PRINT '<<< FAILED CREATING PROC dbo.sp_servermap >>>'
go

Back to top
-------------------------------------------------------------------------------

9.21: sp__create_crosstab

-------------------------------------------------------------------------------
use sybsystemprocs
go

CREATE PROCEDURE sp__create_crosstab
@code_table varchar(30) -- table containing code lookup rows
,@code_key_col varchar(30) -- name of code/lookup ID column
,@code_desc_col varchar(30) -- name of code/lookup descriptive text column
,@value_table varchar(30) -- name of table containing detail rows
,@value_col varchar(30) -- name of value column in detail table
,@value_group_by varchar(30) -- value table column to group by.
,@value_aggregate varchar(5) -- operator to apply to value being aggregated

AS
/*
Copyright (c) 1997, Clayton Groom. All rights reserved.
Procedure to generate a cross tab query script
Requires:
1. A lookup table with a code/id column and/or descriptive text column
2. A data table with a foreign key from the lookup table & a data value to aggregate
3. column(s) name from data table to group by
4. Name of an aggregate function to perform on the data value column.
*/

set nocount on

if sign(charindex(upper(@value_aggregate), 'MAX MIN AVG SUM COUNT')) = 0
BEGIN
print "@value_aggregate value is not a valid aggregate function"
-- return -1
END

declare @value_col_type varchar(12) -- find out data type for aggregated column.
,@value_col_len int -- get length of the value column
,@str_eval_char varchar(255)
,@str_eval_int varchar(255)
-- constants
,@IS_CHAR varchar(100) -- character data types
,@IS_NOT_ALLOWED varchar(100) -- data types not allowed
,@IS_NUMERIC varchar(255) -- numeric data type names
,@NL char(2) -- new line
,@QUOTE char(1) -- ascii character 34 '"'
--test variables
,@value_col_is_char tinyint -- 1 = string data type, 0 = numeric or not allowed
,@value_col_is_ok tinyint -- 1 = string or numeric type, 0 = type cannot be used.
,@value_col_is_num tinyint -- 1 = numeric data type, 0 = string or not allowed

select @IS_CHAR = 'varchar char nchar nvarchar text sysname'
,@IS_NOT_ALLOWED= 'binary bit varbinary smalldatetime datetime datetimn image timestamp'
,@IS_NUMERIC = 'decimal decimaln float floatn int intn money moneyn numeric numericn real smallint smallmoney tinyint'
,@NL = char(13) + char(10)
,@QUOTE = '"' -- ascii 34

-- get the base data type & length of the value column. Is it a numeric type or a string type?
-- need to know this to use string or numeric functions in the generated select statement.
select @value_col_type = st.name
,@value_col_len = sc.length
from syscolumns sc
,systypes st
where sc.id = object_id(@value_table)
and sc.name = @value_col
and sc.type = st.type
and st.usertype = (select min(usertype)
from systypes st2
where st2.type = sc.type)
--select @value_col_type, @value_col_len

select @value_col_is_char = sign(charindex( @value_col_type, @IS_CHAR))
,@value_col_is_ok = 1 - sign(charindex( @value_col_type, @IS_NOT_ALLOWED))
,@value_col_is_num = sign(charindex( @value_col_type, @IS_NUMERIC))

IF @value_col_is_ok = 1
BEGIN
if @value_col_is_char = 1
begin
select @str_eval_char = ''
end
else
if @value_col_is_num = 1
begin
select @str_eval_char = ''
end
else
begin
print " @value_col data type unnown. must be string or numeric"
-- return -1
end
END
ELSE --ERROR
BEGIN
print " @value_col data type not allowed. must be string or numeric"
-- return -1
END

-- template. first level expansion query.
-- result must be executed to generate final output query.

SELECT "select 'select " + @value_group_by + "'"
IF @value_col_is_char = 1
BEGIN
SELECT "select '," + @QUOTE + "' + convert(varchar(40), " + @code_desc_col+ " ) + '" + @QUOTE + @NL
+ " = "
+ @value_aggregate
+ "(isnull( substring("
+ @value_col
+ ", 1, ( "
+ convert(varchar(3), @value_col_len )
+ " * charindex( "
+ @QUOTE
+ "'+"
+ @code_key_col
+ "+'"
+ @QUOTE
+ ", "
+ @code_key_col
+ " ))), "
+ @QUOTE + @QUOTE
+ "))'"
END
ELSE IF @value_col_is_num = 1
BEGIN
SELECT "select '," + @QUOTE + "' + convert(varchar(40), " + @code_desc_col+ " ) + '" + @QUOTE + @NL
+ " = "
+ @value_aggregate
+ "("
+ @value_col
+ " * charindex( "
+ @QUOTE
+ "'+"
+ @code_key_col
+ "+'"
+ @QUOTE
+ ", "
+ @code_key_col
+ "))'"
END
SELECT "from " + @code_table + @NL
+ "select 'from " + @value_table + "'" + @NL
+ "select 'group by " + @value_group_by + "'"

-- end
go
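
A hypothetical invocation (all table and column names below are made up) that
would generate a crosstab of order totals per region, one column per status
code; remember that the query this produces must itself be executed to get the
final crosstab:

-- hypothetical lookup table: status_codes(status_id, status_desc)
-- hypothetical detail table: orders(status_id, region, amount)
exec sp__create_crosstab 'status_codes', 'status_id', 'status_desc',
                         'orders', 'amount', 'region', 'SUM'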

Back to top
-------------------------------------------------------------------------------

9.22: upd_stats.csh

-------------------------------------------------------------------------------
#!/bin/csh
# ########################################################################
# #
# # SCCS Keyword Header
# # -------------------
# #
# # Module Name : update_stats.csh
# # Version : 1.8
# # Last Modified: 2/16/98 at 17:19:38
# # Extracted : 2/16/98 at 17:19:39
# # Archived as : <host>:/u/sybase/SCCS/s.update_stats.csh
# #
# ########################################################################

# upd_stats.csh
# ------------------
#
# Shell to update the distribution pages for each table in a database.
#
# Requires sqlsa (script w/ the proper isql login for dbo of a database)
# ex:
# #!/bin/csh -f
# isql -U<dbusr> -P<dbpw> -S<dbsvr> -w265 $*
# exit($$status)
#
# Author: FJ Lundy, 2/96

ARGS:
set progname = `basename $0`
if ($#argv != 2) then
goto USAGE
endif
set dbdb = $1
set parallel_jobs = $2

INIT:
# Declare intermediate files
set filebase = /tmp/$progname:r.-D$dbdb
set cmdfile = $filebase.sql
set awkfile = $filebase.awk
set tblfile = $filebase.tbl
set workflag = $filebase.working
set logfile = $filebase.log
set runningflag = $filebase.running

# Check for another running copy of this process
if ( -f $runningflag ) goto ERROR

# Trap interrupts so that the script cleans up via the DONE section
onintr DONE

# Clean up from previous runs
rm -f $filebase.* >& /dev/null

# Set the "running flag" (this step must FOLLOW the "clean-up from previous
# runs" step!
touch $runningflag

# Which OS are we running on?
set os = `uname`
switch ($os)
case "IRIX":
case "IRIX64":
case "HP-UX":
set splitFlag = "-l"
breaksw
case "SunOS":
set splitFlag = "-"
breaksw
default:
echo "ERROR: $progname- Unsupported Os($os). Aborting"
exit(-1)
endsw


MAIN:
# Start the Log
rm -f $logfile
echo "$0 $*" > $logfile
echo "NOTE: $progname- (`date`) BEGIN $progname" >> $logfile


# Create the awk command file.
cat << *EOF* > $awkfile
\$0 !~ /^\$/ {
tblname = \$1
printf("declare @msg varchar(255), @dt_start datetime, @dt_end datetime\n")
printf("select @msg = \"Updating Statistics for: Db(%s)\"\n", "$dbdb")
printf("print @msg\n")
printf("select @dt_start = getdate()\n")
printf("update statistics %s\n", tblname)
printf("exec sp_recompile '%s'\n", tblname)
printf("select @dt_end = getdate()\n")
printf("select @msg = \"Table(%s)\"\n", tblname)
printf("print @msg\n")
printf("select @msg = \"\tstart(\" + convert(varchar, @dt_start) + \")\"\n")
printf("print @msg\n")
printf("select @msg = \"\t end(\" + convert(varchar, @dt_end) + \")\"\n")
printf("print @msg\n")
printf("print \"\"\n")
printf("go\n\n")
}
*EOF*


# Create a list of tables to update the stats for
sqlsa << *EOF* | tail +3 | sed 's/^[ ]*//g' | cut -f1 -d\ > $tblfile
set nocount on
use $dbdb
go
select u.name + "." + o.name "Table",
sum((reserved_pgs(i.id, i.doampg) + reserved_pgs(i.id, i.ioampg)) * 2) "Kb"
from sysindexes i, sysobjects o, sysusers u
where (o.id = i.id) and (o.uid = u.uid) and (o.type = "U" or o.type = "S")
group by u.name, o.name
order by Kb desc
go
*EOF*


# Split the files into equal-sized chunks based on the passed
# parameter for the number of parallelized jobs
@ ct = 0
foreach tbl (`cat $tblfile`)
@ i = $ct % $parallel_jobs
echo "$tbl" >> $tblfile.$i
@ ct = $ct + 1
end


# For each of the created table lists:
# 1) create TSQL, 2) set a work flag 3) background the job
@ i = 0
set all_work_flags = ""
foreach file ( $tblfile.* )
# Create the T-SQL command file
@ i = $i + 1
echo "set nocount on" > $cmdfile.$i
echo "use $dbdb" >> $cmdfile.$i
echo "go" >> $cmdfile.$i
awk -f $awkfile $file >> $cmdfile.$i

# Spawn a subshell and remove the working flag when done
# Log output to a log file common to all threads. This can possibly cause
# lost information in the log file if all the threads come crashing in
# at once. Oh well...
set all_work_flags = ( $all_work_flags $workflag.$i )
touch $workflag.$i
(sqlsa < $cmdfile.$i >>& $logfile ; rm -f $workflag.$i) &
end


# Loop until all of the spawned processes are finished (as indicated by the
# absence of working flags)
while ( 1 )
set num_working = `ls $workflag.* | wc -l`
if ( $num_working == 0 ) break
sleep 10
end # end-while: wait for work to finish

DONE:
rm $awkfile $cmdfile.* $tblfile $tblfile.*
rm $runningflag
echo "NOTE: $progname- (`date`) END $progname" >> $logfile
cat $logfile
exit(0)

USAGE:
echo ""
echo "USAGE : $progname <db> <# of parallel jobs>"
echo " Updates the distribution pages for each user and system table in"
echo " the specified database."
echo "REQUIRES: sqlsa"
echo ""
exit(-1)

ERROR:
echo ""
echo "ERROR: $progname- This process is already running for $dbdb. Aborting"
echo ""
exit(-2)

# *EOF*

Back to top
-------------------------------------------------------------------------------

9.23: NTQuery.exe

-------------------------------------------------------------------------------

Brief

ntquery.exe is a 32-bit application providing a lightweight but robust Sybase
access environment for Win95/NT. It has a split window - the top for queries,
the bottom for results and error/message handler responses, which are processed
in-line. Think of it as isql for Windows - a better (more reliable) version of
wisql, with sensible error handling. Because it's simple it can be used against
rep-server (I've also used it against Navigation Server (R.I.P.)).

Requirements: open client/dblib (Tested with 10.x up to 11.1.1)

It picks up the server list from %SYBASE%\ini\sql.ini and you can add
DSQUERY, SYBUSER and SYBPASS variables to your user variables to set default
server, username and password values.

Instructions

To connect: SQL->CONNECT (only one connection at a time, but you can run
multiple ntquery copies). Enter the query in the top window and hit F3 (or
SQL->Execute Query if you must use the mouse). Results, messages and errors
appear in the bottom window.

A script can be loaded into the top window via File->Open. Either SQL or
results can be saved with File->Save - it depends on which window has the
focus.

There's a buffer limit of 2MB.

Get it here

ntquery.zip [22K]

Back to top
-------------------------------------------------------------------------------

9.24: Sybase on Linux FAQ

-------------------------------------------------------------------------------

Sybase have released two versions of Sybase on Linux, 11.0.3.3 and 11.9.2.

11.9.2

This is officially supported and sanctioned. The supported version can be
purchased from Sybase on similar, if not identical, terms to 11.9.2 on NT. The
11.9.2.2 release is imminent.

11.0.3.3


Please remember that Sybase Inc does not provide any official support for
SQL Server on Linux (ie the 11.0.3.3 release). The folks on the 'net
provide the support.

Index

*Minimum Requirements
*How to report a bug
*Bug list

Minimum Requirements

*Linux release: 2.0.36 or 2.1.122 or greater.

How to report a bug

I hope you understand that the Sybase employee who did the port is a very busy
person, so it's best not to send him mail regarding trivial issues. If you have
tried posting to comp.databases.sybase and ase-lin...@isug.com and have
checked the bugs list, send him an e-mail note with the data listed below. You
will not get an acknowledgement to your e-mail; it will go directly into the
bug tracking database. True bugs will be fixed in the next release. Any message
without the Subject line given below will be deleted, unseen, by a filter.


Administrator: I know that the above sounds harsh, but Wim ten has been
launched to world-wide exposure. In order for him to continue to provide
Sybase ASE on Linux outside of his normal workload, we all have to support
him. Thanks!

With the above out of the way, if you find a bug or an issue please report it
as follows:


To: wten...@sybase.com
Subject: SYBASE ASE LINUX PR
uname: the result of typing 'uname -a' in a shell
$SYBASE/scripts/hw_info.sh: As 'sybase' run this shell script and enclose
its output
short description: a one to two line description of the problem
repeatable: yes, you can repeat it, no you cannot
version of dataserver: the result of: as the 'sybase' user, 'cd $SYBASE/bin
' and type './dataserver -v|head -1'
test case: test case to reproduce the problem

Bug List
+-------------+--------+--------------+-------------+-------------+-----------+
| Short | Fixed? | Dataserver | Date | Fix Date | Fix Notes |
| Description | | Release | Reported | | |
+-------------+--------+--------------+-------------+-------------+-----------+
| Remote | Yes | SQL Server/ | Pre-release | Pre-release | You must |
| connections | | 11.0.3.3/P/ | of SQL | of SQL | upgrade |
| hang | | Linux Intel/ | Server | Server | your OS |
| | | Linux 2.0.36 | | | to either |
| | | i586/1/OPT/ | | | 2.0.36 or |
| | | Thu Sep 10 | | | 2.1.122 |
| | | 13:42:44 | | | or |
| | | CEST 1998 | | | greater |
+-------------+--------+--------------+-------------+-------------+-----------+

as of Fri Nov 20 20:16 (08:16:47 PM) MST 1998

Back to top
-------------------------------------------------------------------------------

9.25: Linux Shared Memory for ASE (x86 Processors)

-------------------------------------------------------------------------------

2.2.x Series Kernels and Above

To set the maximum shared memory to 128M use the following:

# echo 134217728 > /proc/sys/kernel/shmmax

This comes from the following calculation: 128Mb = 128 x 1024 x 1024 bytes =
134217728 bytes
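
Note that a value written to /proc does not survive a reboot, so the command is
normally repeated from a boot script before ASE is started. A minimal sketch,
assuming a Red Hat style /etc/rc.d/rc.local (the path is an assumption - use
whatever local startup script your distribution provides):

# in /etc/rc.d/rc.local (or equivalent), before the RUN_<servername> script:
echo 134217728 > /proc/sys/kernel/shmmax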

2.0.x and 2.1.x Kernels

To increase the total memory for ASE (SQL Server) beyond 32mb, several kernel
parameters must be changed.

1. Determine Memory/System Requirements
+a: Total Memory < 128mb - specific instructions
+b: Total Memory > 128mb - specific instructions

2. Modify the linux/include/asm/shmparam.h to setup shared memory
3. Increase the size of the swap
4. Recompile your kernel & start using the new kernel
5. Verify the changes have taken effect
6. Increase the total memory to the desired size

Comments
-------------------------------------------------------------------------------

1a - Total Memory < 128mb - specific instructions

-------------------------------------------------------------------------------

Requirements:

Linux 2.0.36 or higher

Total memory is currently limited to 128mb. A request to the Linux kernel
developers has been made to enable large swap support which will allow the same
size as 2.2.x kernels.
-------------------------------------------------------------------------------

1b - Total Memory > 128mb - specific instructions

-------------------------------------------------------------------------------

Requirements:

*Linux Kernel 2.2.x or higher *
*util-linux package 2.9 or higher *
*Swap space at least as large as the SQL Server


* - both are available from ftp://ftp.us.kernel.org

You need to make the following changes in linux/include/asm-i386/page.h:

- #define __PAGE_OFFSET (0xC0000000)
+ #define __PAGE_OFFSET (0x80000000)

This allows accessing up to 2gb of memory. Default is 960mb.
-------------------------------------------------------------------------------

Step 2: Modify the linux/include/asm/shmparam.h to setup shared memory

-------------------------------------------------------------------------------


[max seg size]
- #define SHMMAX 0x2000000 /* defaults to 32 MByte */
+ #define SHMMAX 0x7FFFE000 /* 2048mb - 8k */

[max number of segments]
- #define _SHM_ID_BITS 7 /* maximum of 128 segments */
+ #define _SHM_ID_BITS 5 /* maximum of 32 segments */

[number of bits to count how many pages in the shm segment]
- #define _SHM_IDX_BITS 15 /* maximum 32768 pages/segment */
+ #define _SHM_IDX_BITS 19 /* maximum 524288 pages/segment */

Alter _SHM_IDX_BITS only if you want to go beyond the default 128MByte, in
which case you also need the corresponding swap space available.

_SHM_ID_BITS + _SHM_IDX_BITS must be equal to or less than 24.

The Linux kernel PAGE size for Intel x86 machines is 4k, so with the values
above a segment can span 2^19 = 524288 pages, i.e. 2GB, and 5 + 19 = 24.

-------------------------------------------------------------------------------

Step 3: To increase the size of swap

-------------------------------------------------------------------------------


$ mkswap -c <device> [size] <- use for pre 2.2 kernels
- limited to 128mb - 8k

$ mkswap -c -v1 <device> [size] <- limited to 2gb - 8k

$ swapon <device>
*Add the following to your /etc/fstab to enable this swap on boot

<device> swap swap defaults 0 0

-------------------------------------------------------------------------------

Step 4: Recompile your kernel & restart using the new kernel

-------------------------------------------------------------------------------


Follow the instructions provided with the Linux Kernel

-------------------------------------------------------------------------------

Step 5: Verify the changes have taken effect

-------------------------------------------------------------------------------


$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 32
max seg size (kbytes) = 2097144
max total shared memory (kbytes) = 67108864
min seg size (bytes) = 1

[jfroebe@jfroebe-desktop asm]$

The changes took.

-------------------------------------------------------------------------------

Step 6: Increase the total memory to the desired size

-------------------------------------------------------------------------------


Because of current limitations in the GNU C Library (glibc), the SQL Server
is limited to 893mb. A workaround to increase this to 1400mb has been
submitted.

Increase the total memory to the desired size. Remember the above limitation as
well as the 128mb limitation on Linux kernel 2.0.36. Note that "total memory"
is specified in 2K pages, so 256000 pages corresponds to 500mb.

For example, to increase the total memory to 500mb:

1> sp_configure "total memory", 256000
2> go
1> shutdown
2> go

-------------------------------------------------------------------------------

Comments


* Note that it is possible to increase the total memory far above the physical
RAM

Back to top
-------------------------------------------------------------------------------

9.26: sp_spaceused_table

-------------------------------------------------------------------------------

Brief

In an environment where a lot of temporary tables (#x) are being created, how
do you tell who is using how much space?

This is a problem because the object names are munged in tempdb. I solved the
problem by creating another procedure from sp_spaceused which takes the
object_id as its parameter instead of the name.

The ksh script which runs through the object ids and the procedure
sp_spaceused_table (a modified sp_spaceused) are given below.

sp_spaceused_table

use sybsystemprocs
go

create procedure sp_spaceused_table
@object_id int
as
declare @type smallint, -- the object type
@msg varchar(250), -- message output
@dbname varchar(30), -- database name
@tabname varchar(30), -- table name
@length int,
@objname varchar(92), -- the object we want size on
@list_indices int -- don't sum all indices, list each


select @objname = NULL, @list_indices = 0

if @@trancount = 0
begin
set chained off
end


set transaction isolation level 1

if not exists (select * from sysobjects where id = @object_id and type = "U")
begin
print "The table does not exists in the current database."
return (1)
end

set nocount on

/*
** We want a particular object.
*/
begin
select name = o.name,
iname = i.name,
low = d.low,
rowtotal = rowcnt(i.doampg),
reserved = convert(numeric(20,9),
(reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg))),
data = convert(numeric(20,9),data_pgs(i.id, i.doampg)),
index_size = convert(numeric(20,9),
data_pgs(i.id, i.ioampg)),
unused = convert(numeric(20,9),
((reserved_pgs(i.id, i.doampg) +
reserved_pgs(i.id, i.ioampg)) -
(data_pgs(i.id, i.doampg) +
data_pgs(i.id, i.ioampg))))
into #pagecounts
from sysobjects o, sysindexes i, master.dbo.spt_values d
where i.id = @object_id
and o.id = @object_id
and d.number = 1
and d.type = "E"

if (@list_indices = 1)
begin
select @length = max(datalength(iname))
from #pagecounts
if (@length > 20)
select index_name = iname,
size = convert(char(10), convert(varchar(11),
convert(numeric(11,0),
index_size / 1024 *
low)) + " " + "KB"),
reserved = convert(char(10),
convert(varchar(11),
convert(numeric(11,0),
reserved / 1024 *
low)) + " " + "KB"),
unused = convert(char(10), convert(varchar(11),
convert(numeric(11,0), unused / 1024 *
low)) + " " + "KB")
from #pagecounts
else
select index_name = convert(char(20), iname),
size = convert(char(10), convert(varchar(11),
convert(numeric(11,0),
index_size / 1024 *
low)) + " " + "KB"),
reserved = convert(char(10),
convert(varchar(11),
convert(numeric(11,0),
reserved / 1024 *
low)) + " " + "KB"),
unused = convert(char(10), convert(varchar(11),
convert(numeric(11,0), unused / 1024 *
low)) + " " + "KB")
from #pagecounts

end

select @length = max(datalength(name))
from #pagecounts

if (@length > 20)
select distinct name,
rowtotal = convert(char(11), sum(rowtotal)),
reserved = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(reserved) *
(low / 1024))) + " " + "KB"),
data = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(data) * (low / 1024)))
+ " " + "KB"),
index_size = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(index_size) *
(low / 1024))) + " " + "KB"),
unused = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(unused) *
(low / 1024))) + " " + "KB")
from #pagecounts
else
select distinct name = convert(char(20), name),
rowtotal = convert(char(11), sum(rowtotal)),
reserved = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(reserved) *
(low / 1024))) + " " + "KB"),
data = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(data) * (low / 1024)))
+ " " + "KB"),
index_size = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(index_size) *
(low / 1024))) + " " + "KB"),
unused = convert(char(15), convert(varchar(11),
convert(numeric(11,0), sum(unused) *
(low / 1024))) + " " + "KB")
from #pagecounts
end
return (0)
go
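
A quick interactive sketch of calling the procedure by hand (the object id
below is made up; substitute one returned by the sysobjects query):

1> use tempdb
2> go
1> select id, name from sysobjects where type = "U"
2> go
1> sp_spaceused_table 123456789
2> go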

ksh script

#!/bin/ksh

if [ $# -ne 1 ]
then
echo "usage: $0 <pisql|disql|dbcisql...>"
exit 1
fi

ISQL=$1

TMP=/tmp/$$
$ISQL <<! | egrep "[0-9][0-9][0-9]" > $TMP
use tempdb
go
select id from sysobjects where type = "U"
go
!

for i in `cat $TMP`
do
echo use tempdb
echo go
echo sp_spaceused_table $i
echo go
done | $ISQL -e

rm $TMP

Back to top
-------------------------------------------------------------------------------

9.27: sybdump

-------------------------------------------------------------------------------

Sybdump is a Tcl script written by De Clarke (d...@ucolick.org) for extracting a
database schema. Look in

ftp://ftp.ucolick.org/pub/src/UCODB

for sybdump.tar or sybdump.tar.gz.

Back to top
-------------------------------------------------------------------------------

9.12: SQSH, Release 1.4


(The current stable release of sqsh is 1.7 and 1.8 is in testing. You might be
better off going straight to Scott's sqsh page for more up to date
information.)
Last Modified: Oct 16, 1996 at 21:24:52 EST
-------------------------------------------------------------------------------

Sybase-FAQ Notice

You are currently reading a special Sybase-FAQified version of my home page. I
will attempt to keep it as up-to-date as possible, however there is a chance
that it may lag somewhat behind my personal page (http://www.voicenet.com/~gray
/sqsh.html). Also, this version has been stripped of changelog and status
information in order to shorten it up a bit for the plain-text version of the
FAQ.

What is SQSH?

Sqsh (pronounced skwish) is short for SQshelL (pronounced s-q-shell). It is
intended as a replacement for the venerable 'isql' program supplied by Sybase,
and came about after years of frustration at trying to do real work with a
program that was never meant to perform real work.

Sqsh is much more than a nice prompt (a la 'dsql', from David B. Joyner), it is
intended to provide much of the functionality provided by a good shell, such as
variables, redirection, pipes, back-grounding, job control, history, command
completion, and dynamic configuration. Also, as a by-product of the design, it
is remarkably easy to extend and add functionality.

Sqsh was designed with portability in mind and has been successfully compiled
on most major UNIX platforms supported by Sybase, such as HP-UX, AIX, IRIX,
SunOS, Solaris, Dynix, OSF/1, DEC Unix, SCO, NeXT, and CP/M (just kidding). It
has also been compiled on most free versions of UNIX, Linux, NetBSD, and
FreeBSD, using the -DNO_DB flag (which turns off database support). It should
build relatively easily on most POSIX and X/OPEN compliant systems.

Join The SQSH Mailing List

[The SQSH mailing list has moved, so I have taken the liberty of editing this.
Send email to sqsh-users...@onelist.com to join the new home of the
mailing list. Ed.]

Where To Get SQSH

Sqsh may be found on the following sites:

*http://www.voicenet.com/~gray/sqsh-1.7.tar.gz
*ftp://poseidon.csci.unt.edu/pub/sqsh
*ftp://ftp.netcom.com/pub/he/heyjude/gray
*ftp://davox2.davox.com/pub/sqsh

Keep in mind that sometimes the different sites become out of sync, so at times
the latest version may only be available at some of them.

If you are wondering what the funny '.gz' extension is on the end of some of
the files, I highly recommend that you grab a copy of ftp://prep.ai.mit.edu/pub
/gnu/gzip-1.2.4.tar or you can get a regular UNIX compressed version http://
www.voicenet.com/~gray/sqsh-1.7.tar.Z.

I also try to keep around the previous release http://www.voicenet.com/~gray/
sqsh-1.6.tar.gz, just in case I royally screw up the current release (which
could happen).

If you have trouble reaching any of the sites above, you can send me e-mail at
gr...@voicenet.com, I am typically pretty good about responding.

SQSH Features


Commands


Sqsh provides all commands provided by isql (such as go, reset, etc.)-- which
wasn't hard, there aren't many of them--along with a large base of extended
commands. Typically all commands in sqsh are prefixed with a '\' to avoid
collision with the TSQL syntax. For example:
1> \help
Available commands:
\abort \alias \buf-append \buf-copy \buf-edit
\buf-get \buf-load \buf-save \buf-show \connect
\done \echo \exit \go \help
\history \jobs \kill \loop \quit
\read \reconnect \redraw \reset \set
\shell \show \sleep \unalias \wait
\warranty emacs vi
Use '\help command' for more details

However, for those of you that just can't stand the '\', all commands may be
aliased to any other name that you wish via the '\alias' command (see Aliasing,
below).

Variables


Variables are provided in sqsh, much in the same way they are used within a
standard shell. They may be used for storing and retrieving information, both
within a sqsh command as well as within a SQL batch.

For example, let's say that you have a long table name that you don't like to
type over and over again; you can use a variable in place of the table name:
1> \set t="a_really_long_table_name"
1> SELECT "Count" = COUNT(*) FROM $t
2> go

Count
-----------
1123
(1 row affected)

Variables may also be used anywhere within a sqsh command, such as:
1> \set g="go"
1> SELECT "Count" = COUNT(*) FROM $t
2> $g

Count
-----------
1123
(1 row affected)

And, since virtually every aspect of sqsh is configurable through variables,
the \set command may also be used to adjust the behavior of sqsh without having
to exit and re-run with a different command line argument (like isql):
1> \set colsep="|"
1> SELECT id, COUNT(*) FROM syscolumns GROUP BY id
2> go
|id | |
|-----------|-----------|
| 1| 19|
| 2| 23|
...

This is the equivalent of exiting isql, and re-running it with the -c flag
(which is also supported by sqsh).

Redirection and Pipes


How many times have you watched a result set disappear from your screen because
you didn't hit ^S fast enough? Well, no more. Now, any command available in
sqsh may be redirected to/from a file or pipelined to another process. For
example, it is now legal to type:
1> SELECT * FROM sysobjects
2> go | grep test | more

You may also redirect output to files and (if you are careful) can redirect
input from files:
1> select * from sysobjects
2> go 2>/dev/null >/tmp/objects.txt

Aliasing


As of release 1.2, sqsh supports full csh-style command aliasing. Aliasing
provides a mechanism for supplying an alternate name for any given internal
sqsh command, as well as a way of supplying additional arguments to any given
command. For example:
1> \alias mo="\go !* | more"
1> SELECT * FROM syspickles
2> mo -h

Is exactly the same as if you had typed:
1> SELECT * FROM syspickles
2> go -h | more

The !* acts as a placeholder that indicates to sqsh that the parameters
supplied to the alias should be inserted at this location. If the !* is not
supplied, the parameters to the alias are appended on the end of the alias
body...

Command Substitution


With the 1.0 release, sqsh is slowly beginning to look more-and-more like a
real shell with the addition of command substitution. This feature allows a
UNIX command to be substituted anywhere within a sqsh command or within a SQL
batch simply by placing the command within backquotes (or ` -- this may not
come out to be a backquote depending on which font your web browser is using).
For example:
1> SELECT COUNT(*) FROM `echo syscolumns`
2> go | `echo more`

Currently, sqsh allows a multi-line command within a SQL batch; however this is
not yet supported on sqsh command lines. For example you can do:
1> SELECT COUNT(*) FROM `echo
2> syscolumns`
3> go

Whereas you cannot do:
1> SELECT COUNT(*) FROM syscolumns
2> go | `echo
more`

Hopefully, in the near future I'll make sqsh smart enough to support
line-continuations with sqsh commands. Believe it or not, it isn't that easy to
do.

Backgrounding And Job Control


Suppose you want to run a long complex query and continue to work while waiting
for the results. With isql, the most effective way to do this was to run two
copies of isql. With sqsh you can now do:
1> SELECT ... /* big nasty select */
2> go &
Job #1 started
1>

After typing 'go &', sqsh launches a child process, which reconnects to the
database and performs the desired query. This is similar to job control within
a standard shell except that, by default, in sqsh the background job's output
will be deferred until the job completes. So when the big nasty query, above,
completes you will see a message like:
1> sp_helptext ...
Job #1 completed (output pending)
2>

and to show the output of the job you can do:
1> \show 1 | more

Once again, the behavior of output deferral may be turned on and off via the
$defer_bg variable.

Sqsh also provides the commonly used job control commands available in such
shells as csh and bash, such as \jobs (to display running jobs) and \kill (to
terminate jobs).

SQL Batch History


Sqsh provides two methods of history control: line-by-line history using
either vi or emacs styles (via ftp://prep.ai.mit.edu/pub/gnu/
readline-2.0.tar.gz), and batch history, so that entire statements
may be re-run or edited:
1> \history
...
(12) SELECT name, id
FROM syscolumns
WHERE name LIKE "%$name%"
(13) SELECT DISTINCT title, type
FROM titles
WHERE title IN
(SELECT title
FROM titles, titleauthor, authors
WHERE titles.title_id = titleauthor.title_id
AND authors.state = "CA")
...

Most commands support a csh-style reference to history entries via '!!' or '!n'.
1> \vi !!

Configurable Exit Status


One of the major complaints most people have with isql is its inability to
react to or report any sort of error condition generated within a SQL batch.
Sqsh provides a somewhat complex but very flexible mechanism for configuring what is
considered an error, which errors are to be displayed, and how to report them
back to the operating system.

Five internal variables are used to control sqsh's behavior in response to
error conditions reported by SQL Server: $thresh_display, $thresh_fail,
$batch_failcount, $thresh_exit, and $exit_failcount, all of which are
configurable at run time as well as via command line flags. The following
briefly outlines these variables and their relationship to each other (a short
example of setting them follows the list):

*$thresh_display
This variable is used to determine at which severity level a SQL Server
message is to be displayed. Setting this to 0 displays all messages, and
setting it to 22 suppresses all error messages.
*$thresh_fail
This variable is used by the error handler to determine which severity
levels are considered by sqsh to be error conditions. The next variable
will explain the importance of this value.
*$batch_failcount
Each time sqsh receives a message of a severity level that is considered
an error (determined by $thresh_fail) this value is incremented to keep
track of the total number of batches that have failed.
*$thresh_exit
This variable is used by sqsh to determine how many error conditions may be
encountered before it will exit. In other words, when $batch_failcount is
equal to $thresh_exit, sqsh will abort. Setting the variable to 0 disables
this feature.
*$exit_failcount
If this variable is set to 1 (or On), then sqsh will exit with an operating
system exit status equal to the total number of errors that have been
encountered (the value of $batch_failcount).
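
For example, to display only messages of severity 10 and above, treat severity
11 and above as failures, and have sqsh give up after three failed batches (the
values here are purely illustrative):

1> \set thresh_display=10
1> \set thresh_fail=11
1> \set thresh_exit=3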


Inter-Server BCP


Using the \bcp command, sqsh supports the ability to transfer the result set
from any command batch to another server (or even the same server) via the
Sybase bcp protocol. This feature is particularly nice because currently the
standard Sybase bcp program does not support transferring directly between
servers, or specifying which rows from the source server are to be copied.
1> SELECT customer_id, item, SUM(qty)
2> FROM orders
3> GROUP BY customer_id, item
4> \bcp -S SYB_DSS shipping.dbo.order_summary

Starting...
Batch successfully bulk-copied to SQL Server
Batch successfully bulk-copied to SQL Server
Batch successfully bulk-copied to SQL Server
...

The \bcp command can deal with multiple result sets, and thus multiple commands
in a batch or multiple results coming back from a single stored procedure (as
long as the data types in all result sets are identical).

Remote Procedure Calls


With sqsh, it is possible to directly invoke a stored procedure without
resorting to language calls (e.g. "EXEC proc_name ..."). This feature is of
particular interest for controlling an Open Server that does not have language
support built in. For example, to invoke the sp_who stored procedure, simply
run:
1> \rpc sp_who gray
...

Sqsh also supports the ability to place the results of an OUTPUT parameter
directly into a sqsh variable. For example, let's say we create a stored
procedure like so:
1> CREATE PROCEDURE test_output
2> @x int OUTPUT
3> AS
4> SELECT @x
5> SELECT @x=20
6> go

We may then invoke the test_output procedure like this:
1> \rpc test_output @x:my_x=10

-----------
10
(0 rows affected)
1> \echo $my_x
20

The \rpc command can be a little bit awkward and non-intuitive, so make sure
you read the manual page closely before working with it.

Semicolon "go"


As of release 0.5, sqsh now supports a form of in-line go, via a ; placed
anywhere within the current line, such as:
1> sp_who ;

And, anything that can follow the "go" command may also follow the inline ;
1> sp_who ; | more

Sqsh even attempts to be relatively smart, and ignores semicolons found within
single or double quotes of a single command, although it currently does not
deal with semicolons located in comments. Note, in order to turn this feature on,
execute:
1> \set semicolon_hack=1

Simple Scripting


Although sqsh does not have a full flow-of-control language (yet), it is
possible to build simple self-executable scripts using the #! notation,
and sqsh's support for positional parameters. For example, to create a UNIX
sp_who program, you simply need to create an executable file containing:
#!/usr/local/bin/sqsh -i

sp_who ${1}
go

The ${1} parameter to sp_who will expand to whatever argument is given when the
script is run. Currently sqsh does not support more advanced positional
parameters, such as $* or $@, like most shells.

Multiple Display Styles


Ever get tired of wading through isql's messy output when dealing with very
wide result sets? Sqsh currently supports three separate display styles,
horizontal (standard isql style), vertical, and bcp, that are switchable at any
time while running via the $style variable or by the -m flag to the \go
command.

With the vertical display style, all data is displayed as column/value pairs
vertically down the left side. The style also nicely handles word-wrapping
on very wide text and varchar column outputs.
1> SELECT * FROM my_table
2> go -m vert

int_col: 1
varchar_col: You will notice that both varchar and text columns gracefully
word-wrap and line up with the widest column name.
float_col: 1.23
text_col: This text column would look really hideous on isql's output
but fortunately sqsh make things look great with the vertical
display style!

int_col: 2
varchar_col: Not much text here.
float_col: 3.141592654
text_col:

(2 rows affected)

And, if you want to simply generate a result set that is easily BCP'able into
another server, the bcp display style is for you. This style throws out all
formatting and simply separates all columns by the value of the $colsep
parameter (by default "|").
1> SELECT * FROM my_other_table
2> go -m bcp
1|Scott|11/03/96 12:59:56|0|||
1|Bob|11/19/96 12:59:56|7||32.5|

This mode pretty much only makes sense when redirecting the output to a file
(see Redirection and Pipes, above).

Miscellaneous


The following touches on a few of the less prominent features of sqsh. It is
by no means a comprehensive list; for more details please refer to the manual
page.

*Configurable Prompt Variable
The sqsh prompt is defined by the $prompt variable which is expanded prior
to reading input from the user. Because sqsh keeps track of its current
state in various variables, the prompt can be used to display such
information as the current database, user, line number, etc. (a short
example follows this list).
*Named Buffers
In addition to the SQL batch history, sqsh also allows the current work
buffer or any history buffer to be copied to and from named buffers for
future use. Named buffers may also be edited and run.
*Reconnection
The SQL Server to which sqsh is connected may be dynamically changed
without exiting using the \reconnect command.
*Configurable Keyword Completion
With GNU Readline support, sqsh adds keyword completion. By default sqsh
will use its internal database of 237 TSQL keywords for tab-keyword
completion. This completion can be configured to be performed in upper
case, lower case, or auto-detect. If the file ~/.sqsh_keywords exists then
the contents of this file are used in place of the internal keyword list.
*Session Locking
Using the \lock command, you may safely walk away from your current sqsh
session without worrying about someone tampering with the database while
you are away.
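
As a small illustration of the prompt variable mentioned above (the variable
names used inside the prompt string are assumptions; check the manual page for
the exact set available in your release):

1> \set prompt='[$DSQUERY] ${lineno}> '
[MYSERVER] 1>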

SQSH Supported Platforms

The following table outlines platforms that sqsh has successfully been compiled
on. In theory each of these platforms should have been compiled painlessly, but
in practice the odder operating systems tend to require a few tweaks. However,
I am always working to make sqsh as easily portable as possible (not always an
easy task).

If you have any additional platforms that you would like to have added to this
list, please send me e-mail; I am always interested in hearing what people are
doing with sqsh.
Hardware OS Compiler Comments
------------------- ------------------- --------- ----------------
Sun Sparc 1000 Solaris 2.4 gcc
HP/9000 E35 HP-UX 10.x gcc, cc
HP/9000 755 HP-UX 9.01 ? gcc -static
SGI Indy IRIX 5.x, 6.x cc 3.19 See README.SGI
NCR System 3000 SVR4 cc
Sequent ? Dynix/ptx 2.1.0 ?
? NeXT ?
150Mhz Pentium SCO ? ?
DEC Alpha OSF/1 ? ?
IBM RS/6000 AIX 3.2 gcc -ltermcap, no -ltli
* Sun IPX SunOS 4.1.2 gcc
* Sun Sparc 4c SunOS 4.1.4 gcc
* HP/300 NetBSD 1.1A gcc
* 486DX/50 Linux 1.3.45 gcc

* Indicates that it has been compiled with -DNO_DB turned on, therefore
the actual database access has not been tested, however 99% of Sqsh
has nothing to do with database activity.

And, for those of you that are interested in such things, sqsh is developed
primarily on Linux 1.3.95 with the -DNO_DB flag on (I haven't managed to port
DB-Lib to Linux yet), and tested on a Sun Sparc Server 1000 running Solaris
2.4.

SQSH Licensing Policy

99% of the software that I use is free, so I like to give back in kind.
Sqsh is held under the GNU General Public License (GPL) and therefore may be
freely distributed under the terms of this license.
-------------------------------------------------------------------------------

Last Modified on Oct 16, 1996 at 21:24:52 EST by Scott C. Gray

Back to top
-------------------------------------------------------------------------------

9.13: sp_getdays

-------------------------------------------------------------------------------
use sybsystemprocs
go

if object_id("sp_days") is not NULL
drop proc sp_days
go

create proc sp_days @days tinyint OUTPUT, @month tinyint, @year smallint
as
declare @date datetime
select @date=convert(char,@month)+"/01/"+convert(char, @year)
select @days=datediff(dd,@date, dateadd(mm,1,@date))
select @days
go

grant exec on sp_days to public
go
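
A minimal usage sketch of the sp_days procedure above (February 2000 is a leap
year, so this should print 29):

1> declare @d tinyint
2> exec sp_days @d output, 2, 2000
3> go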

Back to top
-------------------------------------------------------------------------------

9.14: ddl_insert.pl

-------------------------------------------------------------------------------

In order to use this script you must have Sybperl installed -- see Q9.4 for
more information.

#!/usr/local/bin/perl

# Author: Vincent Yin (um...@mctrf.mb.ca) Aug 1994 Last Modified: May 1996

chomp($basename = `basename $0`);

$usage = <<EOF;
USAGE
$basename database userid passwd pattern [ pattern... ]

DESCRIPTION
Prints isql scripts that would insert records into the
tables whose names match any of the patterns in command line. In
other words, this program reverse engineers the data in a given
table(s). Roughly, it `select * from <table>', analyses the data
and table structure, then prints out a bunch of
insert <table> values ( ... )
statements that would re-populate the table. It's an alternative
to `bcp'. `bcp' has its limitations (e.g. one often needs to turn on
'select into/bulk copy' option in the database before running bcp.)

Table names are matched to <pattern> with Transact-SQL's LIKE clause.
When more than one pattern is specified on command line, the LIKE
clauses are OR'ed. In any case, the LIKE clause(s) is logged to
the beginning of the output as a comment, so that you'll see how this
program interprets the command line.

The SQL script is printed to stdout. Since it only prints out the SQL
but doesn't submit it to the SQL server, this procedure is safe to run.
It doesn't modify database in any way.

EXAMPLES
To print this usage page:
% $basename
To print SQL that populates the table master..sysobjects and systypes:
% $basename master userid passwd 'sysobjects' 'systypes'
To print SQL that populates all system tables in master db:
% $basename master userid passwd 'sys%'

BUGS
Embedded line breaks in strings are allowed in Sybase's isql, but not
allowed in SQLAnywhere's isql. So this script converts embedded line
breaks (both DOS styled and UNIX styled) to blank characters.

EOF

$batchsize = 10; # The number of INSERTs before a `go' is issued.
# This is to make the output compact.

# .................... No change needed below this line ........................

use Sybase::DBlib;

die $usage unless $#ARGV >= 3;
($db, $user, $passwd, @pattern) = @ARGV;

$likeclause = &sql_pattern_to_like_clause('name', @pattern);

print <<EOF;
-- This script is created by $0.
-- It would generate INSERT statements for tables whose names match the
-- following pattern:
/* $likeclause
*/

set nocount on
go
EOF

$dbh = new Sybase::DBlib $user, $passwd;
$dbh->{dbNullIsUndef} = 1;
$dbh->dbuse($db);

# Get the list of tables.
$tablelist = $dbh->sql("select name from sysobjects
where type in ('S','U') and $likeclause
order by name
");

foreach $tableref (@$tablelist) {
$table = @$tableref[0];
print "\n\n/*.............. $table ...............*/\n";
print "-- ", `date`, "\n";
print "declare \@d datetime\n";
print "select \@d = getdate()\n";
print "print ' %1! $table', \@d\ngo\n\n";
print "truncate table $table -- Lookout !!!!!!\ngo\n\n";

$dbh->dbcmd("select * from $table");
$dbh->dbsqlexec;
$dbh->dbresults;

while (@row = $dbh->dbnextrow()) {
print "insert $table values(";
for ($i=0; $i <= $#row; $i++) { # build the INSERT statement
# Analyse datatype to decide if this column needs to be quoted.
$coltype = $dbh->dbcoltype($i+1);
if (!defined($row[$i])) {
print 'NULL'; # Never quote NULL regardless of datatype
}
elsif ($coltype==35 or $coltype==39 or $coltype==47 or
$coltype==58 or $coltype==61 or $coltype==111 ){
# See systypes.type/name for explanation of $coltype.
$row[$i] =~ s/\r|\n/ /g; # Handles both DOS and UNIX line breaks
$row[$i] =~ s/"/""/g; # Stuff double quotes
print "\"" . $row[$i] . "\"";
}
else {
print $row[$i];
}
print ", " unless $i == $#row;
}
print ")\n"; # wrap up the INSERT statement.
# print `go' at every $batchsize interval.
print "go\n" unless $dbh->DBCURROW % $batchsize;
}
print "\ngo\n\n"; # print a `go' after the entire table is done.
print "-- ### End for $table: rowcount = ", $dbh->DBCURROW, "\n";
}

# ................................. sub ........................................
sub main'sql_pattern_to_like_clause {
local($field_name, @pattern) = @_;
$like_clause = "\t( 1 = 0 ";
foreach (@pattern) {
$like_clause .= "\n or $field_name like '" . $_ . "' ";
}
$like_clause .= "\n\t) \n";
}

Back to top
-------------------------------------------------------------------------------

9.15: sp_ddl_create_table

-------------------------------------------------------------------------------
use master
go

drop proc sp_ddl_create_table
go

create proc sp_ddl_create_table
as

-- Creates the DDL for all the user tables in the
-- current database

select right('create table ' + so1.name + '(' + '
', 255 * ( abs( sign(sc1.colid - 1) - 1 ) ) )+
sc1.name + ' ' +
st1.name + ' ' +
substring( '(' + rtrim( convert( char, sc1.length ) ) + ') ', 1,
patindex('%char', st1.name ) * 10 ) +
substring( '(' + rtrim( convert( char, sc1.prec ) ) + ', ' + rtrim(
convert( char, sc1.scale ) ) + ') ' , 1, patindex('numeric', st1.name ) *
10 ) +
substring( 'NOT NULL', ( convert( int, convert( bit,( sc1.status &
8 ) ) ) * 4 ) + 1, 8 * abs(convert(bit, (sc1.status & 0x80)) - 1 ) ) +
right('identity ', 9 * convert(bit, (sc1.status & 0x80)) ) +
right(',', 5 * ( convert(int,sc2.colid) - convert(int,sc1.colid) ) ) +
right(' )
' + 'go' + '
' + '
', 255 * abs( sign( ( convert(int,sc2.colid) - convert(int,sc1.colid) ) ) -
1 ) )
from sysobjects so1,
syscolumns sc1,
syscolumns sc2,
systypes st1
where so1.type = 'U'
and sc1.id = so1.id
and st1.usertype = sc1.usertype
and sc2.id = sc1.id
and sc2.colid = (select max(colid)
from syscolumns
where id = sc1.id)
order by so1.name, sc1.colid
go

grant execute on sp_ddl_create_table to public
go
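
To generate the CREATE TABLE statements for the user tables in a database, run
the procedure from that database and capture the output, for example (a sketch
only; pubs2 is just the sample database):

1> use pubs2
2> go
1> sp_ddl_create_table
2> go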

Back to top
-------------------------------------------------------------------------------

9.16: int.pl

-------------------------------------------------------------------------------

Background

Please find included a copy of int.pl, the interfaces file conversion tool. It
should work with perl 4 and 5, but some perl distributions don't seem to
support gethostbyname, which you need for the solaris, ncr, and vms file formats.

You may need to adjust the first line to the path of perl on your system, and
may need to set the PERLLIB environment variable so that it finds the
getopts.pl module.

While it may not be 100% complete (e.g. it ignores the timeout field), you're
free to add any functionality you may need at your site.
int.pl -h will print the usage; a typical invocation is
int.pl -f sun4-interfaces -o sol > interfaces.sol

Note that I can't offer any kind of support, but I welcome comments, feedback,
improvements, bug fixes, etc. at m...@beasys.com. The usual disclaimers apply.

Also, let me know whether it made your job easier, how often you use it, and
how much time you save by using it.

Code

#!/usr/local/perl/bin/perl

# $Date: 2000/06/09 21:45:01 $ - $Author: dowen $
# $Id: section9.html,v 1.3 2000/06/09 21:45:01 dowen Exp $

# convert a sun4 interfaces file to a different format (see @modelist)
# limitations:
# - does not handle tli/spx entries (yet)
# - drivers for desktop platform hard coded
# - no sanity checks (duplicate names, incomplete entries)
# - ignores extraneous tokens silently (e.g. a 6th field)
# - don't know whether/how to convert decnet to tli format
# - ???

require "getopts.pl";

sub usage
{
local(@token) = @_;

if (!($token[0] eq "short" || $token[0] eq "long"))
{
printf STDERR "Environment variable(s) @token not defined.\n";
exit (1);
}

print STDERR <<EOM;
Usage: $progname -f <sun4 interfaces file>
-o { $modetext1 }
[-V] [-v] [-h]
EOM

if ($token[0] eq "long")
{
print STDERR <<EOM;
where
-f <file> input file to process
-o <mode> specify output mode
(e.g. $modetext2)
-V turn on verbose mode
-v print version string
-h print this message
EOM
}
else
{
print STDERR "For more details run $progname -h\n";
}
exit(1);
} # end of usage


# FUNCTION NAME: parse_command_line
# DESCRIPTION: call getopts and assign command line arguments or
# default values to global variables
# FORMAL PARAMETERS: none
# IMPLICIT INPUTS: command line arguments
# IMPLICIT OUTPUTS: $inputfile, $mode, $verbose
# RETURN VALUE: none, exits (in usage) if -h was specified
# (help option).
# SIDE EFFECTS: none
#
sub parse_command_line {
&Getopts("f:o:hvV") || &usage("short");
$inputfile = $opt_f;
$mode = $opt_o;
$verbose = $opt_V ? 1 : 0;

print("$progname version is: $version\n"), exit 0 if $opt_v;
&usage("long") if $opt_h;
&usage("short") if ! $inputfile || ! $mode;
&usage("short") if ! grep($mode eq $_, @modelist);
} # end of parse_command_line

# FUNCTION NAME: process_file
# DESCRIPTION: parse file, try to convert it line by line.
# FORMAL PARAMETERS: $file - file to process
# IMPLICIT INPUTS: none
# IMPLICIT OUTPUTS: none
# RETURN VALUE: none
# SIDE EFFECTS: none

sub process_file {
local($file) = @_;
open(INPUT, "<$file") ||
die "can't open file $file: $!\nExit.";
local($line) = 0;
local($type, $prot, $stuff, $host, $port, $tmp);
print $os2_header if $mode eq "os2";
while (<INPUT>)
{
$line++;
# handle empty lines (actually lines with spaces and tabs only)
#print("\n"), next if /^\s*$/;
next if /^\s*$/;
chop;
# comments, strip leading spaces and tabs
s/^\s*//, print("$_$lf{$mode}\n"), next if /^\s*#/;
#s/^\s*//, next if /^\s*#/;

# server names
if (/^\w+/)
{
if ($mode eq 'sol' || $mode eq 'ncr'
|| $mode eq 'vms' || $mode eq 'nw386')
{
print "$_$lf{$mode}\n";
next;
}
elsif ($mode eq "os2")
{
$server = $_;
next;
}
else {
print "[$_]$lf{$mode}\n" if !(/SPX$/);
next;
}
}

if (/^\tmaster|^\tquery|\tconsole/)
{
# descriptions
# parse first whitespace delimited word and
# following space(s)
# quietly ignore any extraneous characters
# I actually tried to catch them, but - believe
# it or not - perl would chop off the last digit of
# $port. vvvv
# /^\t(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\d+)(.+)$/;
if (!(($type, $prot, $stuff, $host, $port) =
/^\t(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s+(\S+)/))
{
print STDERR "line $line: unknown format: $_";
next;
}
#print ("line $line: more than 5 tokens >$etc<, \n"),
# next if $etc;
if (!($type eq "master" || $type eq "query"
|| $type eq "console"))
{
# unknown type
print STDERR "line $line: unknown type $type\n";
next;
}
if ($prot eq 'tli')
{
#print STDERR "line $line: can't handle tli",
# " entries (yet)\n";
# adjust to tli format
($layer, $prot, $device, $entry) =
($prot, $stuff, $host, $port);
print "\t$type tli $prot $device ",
"$entry$lf{$mode}\n" if $mode ne 'win3';
next;
}
if (!($prot eq "tcp" || $prot eq "decnet"))
{
# unknown protocol
print STDERR
"line $line: unknown protocol $prot\n";
next;
}
if ($mode eq 'sol' || $mode eq 'ncr' || $mode eq 'nw386')
{
$ip = &get_ip_address($host, 'hex');
$hexport = sprintf("%4.4x", $port);
print "\t$type tli $prot $device{$prot} \\x",
"$prefix{$mode}$hexport$ip$nulls{$mode}\n";
next;
}
if ($mode eq 'vms')
{
$ip = &get_ip_address($host, 'dot');
print "\t$type $prot $stuff $ip $port\n";
next;
}
if ($mode eq 'nt386')
{
$type =~ tr/a-z/A-Z/;
print "\t$type=$sock{$mode},$host,",
"$port$lf{$mode}\n";
next;
}
if ($mode eq 'dos' || $mode eq 'win3')
{
next if $type ne "query";
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
next;
}
if ($mode eq 'ntdoswin3')
{
($tmp = $type) =~ tr/a-z/A-Z/;
# watch out for this local($mode) !!
# its scope is this BLOCK only and
# (within this block) overrides the
# other $mode!!! But we can still access
# the array %sock.
local($mode) = 'nt386';
print "\t$tmp=$sock{$mode},$host,$port",
"$lf{$mode}\n";
next if $type ne "query";
$mode = 'dos';
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
$mode = 'win3';
print "\t${mode}_$type=$sock{$mode},",
"$host,$port$lf{$mode}\n";
next;
}
if ($mode eq 'os2')
{
print " \"$server\" \"$type\" \"$sock{'os2'}",
",$host,$port\"\n";
next;
}
}
printf STDERR "line $line is ->%s<-\n", chop($_);
}
close(INPUT);
print $os2_tail if $mode eq "os2";

} # end of process_file

# FUNCTION NAME: print_array
# DESCRIPTION: print the array
# FORMAL PARAMETERS: *array - array to be printed, passed by reference
# IMPLICIT INPUTS: none
# IMPLICIT OUTPUTS: none
# RETURN VALUE: none
# SIDE EFFECTS: none
#
sub print_array {
local(*array) = @_;
foreach (sort keys %array)
{
printf STDERR "%-16s %s\n", $_, $array{$_};
}

} # end of print_array

# FUNCTION NAME: get_ip_address
# DESCRIPTION: get the ip address of a host specified by name, return
# it as a string in the requested format, e.g.
# requested format == 'dot' --> return 130.214.140.2
# requested format == 'hex' --> return 82d68c02
# In order to avoid repeated calls of gethostbyname with
# the same host, store (formatted) results of gethostbyname
# in array %map.
# FORMAL PARAMETERS: name of host, requested return type: hex or dot format
# IMPLICIT INPUTS: %map
# IMPLICIT OUTPUTS: none
# RETURN VALUE: ip address
# SIDE EFFECTS: maintains %map, key is host name, value is ip address.
#
sub get_ip_address {
local($host, $mode) = @_;
if (!$map{$host})
{
#print "calling gethostbyname for $host";
($name, $aliases, $addrtype, $length, @addrs) =
gethostbyname($host);
$map{$host} = join(".", unpack("C4", $addrs[0]));
if ($mode eq 'hex')
{
$map{$host} = sprintf("%2.2x%2.2x%2.2x%2.2x",
split(/\./, $map{$host}));
}
#print " - $map{$host}\n";
}
return $map{$host};
} # end of get_ip_address


$version = "\$Id: section9.html,v 1.3 2000/06/09 21:45:01 dowen Exp $";
$| = 1;
($progname = $0) =~ s#.*/##g;
@modelist = ('sol', 'ncr', 'vms', 'nw386', 'os2',
'nt386', 'win3', 'dos', 'ntdoswin3');
$modetext1 = join('|', @modelist);
$modetext2 = join(', ', @modelist);

# tli on solaris needs more zeroes
$nulls{'sol'} = "0000000000000000";
$nulls{'nw386'} = "0000000000000000";
$nulls{'ncr'} = "";
$nulls{'nt386'} = "";

# prefix for tli entries
$prefix{'sol'} = "0002";
$prefix{'nw386'} = "0200";
$prefix{'ncr'} = "0002";
$prefix{'nt386'} = "0200";

# protocol devices
$device{'tcp'} = '/dev/tcp';
$device{'spx'} = '/dev/nspx';
$device{'decnet'} = '/dev/tcp';

# socket driver names
$sock{'nt386'}="NLWNSCK";
$sock{'dos'}="NLFTPTCP";
$sock{'win3'}="WNLWNSCK";
$sock{'os2'}="nlibmtcp";

# linefeed's (^M) for the MS world
$lf{'nt386'}="
";
$lf{'dos'}="
";
$lf{'win3'}="
";
$lf{'ntdoswin3'}="
";
$lf{'os2'}="";
$lf{'vms'}="";
$lf{'sol'}="";
$lf{'ncr'}="";
$lf{'nw386'}="";

$os2_header = sprintf("STRINGTABLE\nBEGIN\n%s", " \"\"\n" x 10);
$os2_tail = "END\n";

&parse_command_line;
&process_file($inputfile);
&print_array(*map) if $verbose;

Back to top
-------------------------------------------------------------------------------
