Questions about SAP ASE 16 (formerly, Sybase)


Nicola

Aug 4, 2021, 7:21:38 AM
I am starting a separate thread about this:

> 1. Visit this page
> __ https://help.sap.com/viewer/product/SAP_ASE/16.0.4.0/en-US?task=whats_new_task
> 2. Select [ Download PDFs ] at top right
> 3. Choose the manuals you want, and download them.

Thanks, got them already.

> Feel free to ask me questions.

What page size (2/4/8/16 KB) and what type of workload (Mixed/OLTP)
do you configure?

What are the strictly technical reasons to prefer ASE over SQL Server?
Or SQL Server over ASE?

How about this historical assessment:

https://dbdb.io/db/adaptive-server-enterprise

Buggy release, mismanagement… Doesn't sound like a product to go after ;-)

Nicola

Derek Ignatius Asirvadem

Aug 4, 2021, 10:56:18 PM
> On Wednesday, 4 August 2021 at 21:21:38 UTC+10, Nicola wrote:
> I am starting a separate thread about this:
>
> > 1. Visit this page
> > __ https://help.sap.com/viewer/product/SAP_ASE/16.0.4.0/en-US?task=whats_new_task
> > 2. Select [ Download PDFs ] at top right
> > 3. Choose the manuals you want, and download them.
>
> Thanks, got them already.
>
> > Feel free to ask me questions.

Whoa. Evidently you have missed the context. You said you downloaded Sybase, and that you wanted to benchmark and confirm some declarations that I made, while re-framing it as "claims". I said I encourage any academic moving from their isolated tiled room, into the real world. I gave some cautions, I offered assistance, in that regard. That is to get you merrily on your way, on your stated task.

My invitation should not be construed as "feel free to ask me questions about anything you want".

Further, you have to do some work and climb the learning curve yourself.

> What page size (2/4/8/16 KB) and what type of workload (Mixed/OLTP)
> do you configure?

You have found the manuals. You need to find the SAP Developers Forum or network. Questions of this type, which have many considerations, are discussed and answered there. There are a few SAP/Sybase engineers there, who answer questions.

SG was a Sybase Partner for 17 years, right up to the acquisition by SAP. I was active in the previous Sybase technical forum, and there are many posts (with full discussion) on this and related subjects. You could have just googled the subject plus my name, and you would get your answer. That was 20 years ago. Truth does not change, the answers are still valid.

After SAP, that forum was removed from the internet; it is now archived in the new SAP developer forum (you can no longer google for the subject matter). Please go there (use the search facility) and read the discussion for considerations, and choose from the answers. Ask away.

Sorry, I do not have the time to post duplicate info, or to get into yet another open discussion, about closed subjects.

On the other hand, if you have a specific question re your stated project, I would be happy to help. No discussion, just the answer.

> What are the strictly technical reasons to prefer ASE over SQL Server?
> Or SQL Server over ASE?

Ditto.

> How about this historical assessment:
>
> https://dbdb.io/db/adaptive-server-enterprise

These days, anyone with a keyboard; two fingers and a bit of tissue connecting those two fingers, can post articles on the internet. They do not have to have any grey cells in that connective tissue, or any actual experience about the subject. Then there is a host of academics, who post negative articles against anything they do not understand, which as you have seen in the last ten years, is an awful lot, due to their proud isolation from reality. Such people, both ordinary idiots, and academically qualified idiots, are unskilled, and worse, unaware that they are unskilled. They think themselves skilled.

There are a good few scientific papers on the subject (as distinct from the filth that academics in this field produce). This is required reading for anyone who takes my courses. It defines the state of academics in this field, as evidenced in our threads.
__ Unskilled and Unaware 1999, Kruger & Dunning
____ https://www.softwaregems.com.au/Documents/Reference/Unskilled%20%26%20Unaware%201999.pdf
__ Unskilled and Unaware 2008, follow-up, responding to the attacks
____ https://www.softwaregems.com.au/Documents/Reference/Unskilled%20%26%20Unaware%202008.pdf

Hence the internet is a cesspool, and as a number of people have stated, it is rare to find a genuine authority on a subject, where answers are direct and permanent, instead of endless discussion without resolution.

Choose what you read carefully.

> Buggy release, mismanagement… Doesn't sound like a product to go after ;-)

You are right. Drop it. Go back to your freeware herd, the hundreds of programs that pretend to be a server, that pervert SQL, that re-define concepts in order for their broken implementation to appear compliant. Given your recent questions, and your difficulty installing and running commercial products, you will be so much happier there. No need to obtain actual experience in the real world, just rail against it from the safety of the tiled cell.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 6, 2021, 4:51:19 AM
> On Thursday, 5 August 2021 at 12:56:18 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Wednesday, 4 August 2021 at 21:21:38 UTC+10, Nicola wrote:
>
> > What page size (2/4/8/16 KB) and what type of workload (Mixed/OLTP)
> > do you configure?

> On the other hand, if you have a specific question re your stated project, I would be happy to help. No discussion, just the answer.

In case you wish to leave the asylum for an hour or so, for a picnic in the park, to enjoy the sunshine, and perhaps actually produce something on the shiny new SAP/Sybase Server ... if I rephrase your question such that it is pertinent and direct, such as:

< < What page size (2/4/8/16 KB) do you recommend that I configure ?

Because this is a permanent physical article, that precedes installation, for what I expect is best for you, eg. I expect you to stress the server with benchmarks and large Transactions, as well as a mixed OLTP+OLAP load = 4KB.

If you are on Unix/Linux, make sure you create only Raw Partitions for all Devices, never filesystem files.
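
For context, the page size is fixed when the server is built; it cannot be changed later without a rebuild. A minimal sketch of building a new master device with 4KB pages (the device path, size and server name here are illustrative, not from this thread):

__ dataserver -d /dev/raw/raw0 -z 4k -b 512M -s MYSERVER

The usual route is the srvbuild/srvbuildres utilities, which ask for the page size interactively; either way, choose it before you create the server.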

Cheers
Derek

Nicola

Aug 8, 2021, 8:24:39 AM
On 2021-08-06, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> if I rephrase your question such that it is pertinent and direct, such
> as:
>
>< < What page size (2/4/8/16 KB) do you recommend that I configure ?

> Because this is a permanent physical article, that precedes
> installation, for what I expect is best for you, eg. I expect you to
> stress the server with benchmarks and large Transactions, as well as
> a mixed OLTP+OLAP load = 4KB.

Ok, thanks. I had inferred from your documents that your benchmarks were
done on systems configured with 2 KB pages, so I was wondering whether
(that is the case and) you had a point to always prefer smaller
pages to larger ones.

> If you are on Unix/Linux, make sure you create only Raw Partitions for
> all Devices, never filesystem files.

Ok.

Nicola

Derek Ignatius Asirvadem

Aug 9, 2021, 4:58:39 AM
> On Sunday, 8 August 2021 at 22:24:39 UTC+10, Nicola wrote:
> > On 2021-08-06, Derek Ignatius Asirvadem wrote:
> > if I rephrase your question such that it is pertinent and direct, such
> > as:
> >
> >< < What page size (2/4/8/16 KB) do you recommend that I configure ?
>
> > Because this is a permanent physical article, that precedes
> > installation, for what I expect is best for you, eg. I expect you to
> > stress the server with benchmarks and large Transactions, as well as
> > a mixed OLTP+OLAP load = 4KB.
>
> Ok, thanks. I had inferred from your documents that your benchmarks were
> done on systems configured with 2 KB pages,

Yes.
TPC requires small fast Transactions.

> so I was wondering whether
> (that is the case and)

Yes.

> you had a point to always prefer smaller
> pages to larger ones.

It is not a "preference", it is a scientifically determined article. We have over 250 configuration parms, most of which are related to each other, additionally all resources [look into the named caches] are configured based on memory; etc. We use a spreadsheet and everything. Even for pure P&T assignments, I publish the spreadsheet as an appendix in the final Before & After report.

Yes.
- For myself.
- For servers & databases that I rebuild for customers AFTER writing a Version2 system for them, which means the entire app, and small fast Transactions in stored procs only.

For servers & databases that I rebuild but not rewrite, ie. no change to the app or the database logically, no. I leave the pagesize as is.

For correction of mistakes (eg. customer has moved from 2KB to 4KB to 8KB to ease Transaction problems, which is a stupid thing to do, as it has no effect on the problem because it is not the cause of the problem), as part of the server rebuild, I determine what it should be for their usage; load, vs their physical resources, and implement that. No, you can't have the spreadsheet.

Most Sybase customers are on 4KB.

High-end OLTP customers (serviced by high-end guys such as me) are on 2KB. Ie. 2KB has never been an issue.

Some are on 8KB for a mixed load.

16KB is for OLAP only.

Advice for you:
> > Because this is a permanent physical article, that precedes
> > installation, for what I expect is best for you, eg. I expect you to
> > stress the server with benchmarks and large Transactions,

Guaranteed that you will jack around with huge Transactions.
Guaranteed you will think the problem is 2KB pagesize.
Guaranteed that you will not listen to reasons, as to why that is false.
Argument avoided.

> > as well as
> > a mixed OLTP+OLAP load = 4KB.

Regardless of what you think, that will be the best for the general load, over any duration of time.

Cheers
Derek

Nicola

Aug 28, 2021, 2:46:23 PM
Derek,
I've gone full speed with some benchmark scripts up and running against
a pretty default ASE installation. I have created two devices, one for
databases ("userdbdev") and one for the transaction log ("userlogdev").
My test database was created with:

create database scratch on userdbdev = '1g' log on userlogdev = '1g'

After several runs of my scripts, I have started to get this error:

The transaction log in database scratch is almost full. Your transaction
is being suspended until space is made available in the log.

1. How do I flush the transaction log?
2. How do I monitor the size of the transaction log?
3. How do I avoid the above message in the first place?

Nicola

Derek Ignatius Asirvadem

Aug 28, 2021, 10:53:00 PM
Nicola

> On Sunday, 29 August 2021 at 04:46:23 UTC+10, Nicola wrote:
> Derek,
> I've gone full speed with some benchmark scripts up and running

Excellent.

> against
> a pretty default ASE installation.

Bad idea. The one (benchmark) contradicts the other (simple default db).

You will be better off:
1. getting a default db geared up to a real db
__ non-default; proper distribution of tables across devices;
2. tranlog set up for purpose; etc.
__ depending on db being (a) Development (no recovery) to ... (g) high-end Production (instant recovery)
__ (a) can be done via config ... somewhere about (c) you need a stored proc for auto-dump-when-full
3. set up monitoring (which you need for diagnostics)
__ loop between [1][2][3] until you have no problems
4. and then benchmarking, which will use [3].

So count this as [1], with those steps in mind.

> I have created two devices, one for
> databases ("userdbdev") and one for the transaction log ("userlogdev").
> My test database was created with:
>
> create database scratch on userdbdev = '1g' log on userlogdev = '1g'

Question, have you read the manuals and understood how the tranlog works ?

a. a newbie would set the tranlog at 10% of the data size (not scientific, but ok)
b. the tranlog needs to be dumped regularly, based on what/how/where you wish to obtain recovery. It is not a simple or binary decision.
c. for a benchmark or production you need a purpose-written sp (because it uses site-specific resources and settings)
d. I have high-activity Production dbs with 500GB data and 100MB tran-log. Ie. the tran-log size is dependent on (i) activity, and (ii) max tran size [which should be small], and (iii) how it gets dumped, which is that sp.

Eg. mine are set up for transfer to a DisasterRecovery db (for cut-over if DR ever happens), so the tranlog gets dumped far more frequently than necessary per activity.

For plain per-activity dumping, for a benchmark, which means somewhat geared up, but not full Production, use a tranlog size of 100MB or 200MB. And deal with the issues, and get it bedded down, so that the benchmark simulates a real-world scenario. (That is what I do for a true benchmark.)
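
To make that concrete, a re-created benchmark db per the above might look like this (sizes illustrative; device names from your own setup):

1> CREATE DATABASE scratch ON userdbdev = '1g' LOG ON userlogdev = '200m'
2> GO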

> After several runs of my scripts,

> I have started to get this error:
>
> The transaction log in database scratch is almost full. Your transaction
> is being suspended until space is made available in the log.

But note, that is reporting that the tran-log in db_name “scratch” (a database) is full, not “userdbdev” (a device).

That is correct operation, but you are missing the set-up, and of course understanding.

You need to read:
_ minimum: Commands manual for DUMP DB and DUMP TRAN, there are good sections for understanding
_ better: System Admin Guide 1, complete manual
_ minimum: System Admin Guide 1, ch 3 (1p4): READ THIS RIGHT NOW
_ minimum: System Admin Guide 1, ch 8 (10p): READ THIS RIGHT NOW

In Unix, everything is case-sensitive.
SQL is case-insensitive, except for object names. The convention is to use uppercase for commands.

> 1. How do I flush the transaction log?

After ANY db is set up, the first thing is, you need a DB-DUMP, to recover the whole DB if necessary.

You must ensure that the logsegment is actually on the log-device, and not on the data-devices.

> “flush”

It depends on exactly what you want ...

1> -- [A] Proper manual DB-DUMP, file_path needs to have space-used-in-db
2> DUMP DATABASE <db_name> to <db_file_path>
3> GO


1> -- [B] Proper manual TRAN-DUMP (very first step for understanding):
2> -- file_path needs to have space-used-in-tran-log
3> DUMP TRAN <db_name> to <log_file_path>
4> GO

But you are much better off defining DUMP-DEVICES.

1> -- [C] When you get 1105 or “log suspended”, it is usually too late, so you need:
2> DUMP TRAN <db_name> WITH TRUNCATE_ONLY
3> -- or worse:
4> DUMP TRAN <db_name> WITH NO_LOG
5> GO

Which clears the 1105 or “log suspend”, but now recovery on that db is from a DB-dump-file only. So you must DUMP DATABASE to produce a fresh one.
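
The DUMP-DEVICES mentioned above are defined once, then DUMPed to by logical name thereafter. A sketch (logical name and physical path are hypothetical):

1> EXEC sp_addumpdevice 'disk', scratch_logdump, '/sybase/dumps/scratch_log.dmp'
2> GO
1> DUMP TRAN scratch TO scratch_logdump
2> GO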

1> -- [D] Development db: set db_name to TRUNCATE tranlog automatically
2> -- tranlog can be very small
1> USE scratch
2> GO
1> EXEC master..sp_dboption scratch, "trunc log on chkpt", true
2> GO
1> CHECKPOINT
2> GO

[E] Set up any db to be automatically tran-log dumped when “full”. That means thresholds on the logsegment, and a threshold stored proc to be executed when a threshold is reached. There can be more than one threshold, more than one escape determination. Eg. for tempdb, I TRUNCATE; for production I DUMP; for emergency in tempdb, I kill tasks (which clears their data usage).
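
A minimal threshold proc of that kind might be sketched as follows (the System Admin Guide has a fuller sample; the dump path is illustrative, and a real proc builds a unique file name per dump):

CREATE PROCEDURE sp_thresholdaction
__ @dbname varchar(30),
__ @segmentname varchar(30),
__ @space_left int,
__ @status int
AS
__ -- dump the tranlog when the logsegment threshold fires
__ IF @segmentname = 'logsegment'
____ DUMP TRANSACTION @dbname TO '/sybase/dumps/tranlog.dmp'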

Truncate means truncate, not dump. Therefore TRANSACTION recovery is no longer possible. Which means only DATABASE recovery possible (LOAD DATABASE <db_name> FROM <db_file_path>).

If all goes well, even reasonably, you will never have to LOAD TRAN FROM <log_file_path>. It will be DUMPED or TRUNCATED as planned, and they will be in a planned location if need be, but never get used.

Mostly, you will be doing things in a DB, and then DUMP DATABASE at the close of the DDL set up, before the activity (such as a benchmark).

When you have the benchmark tables full (eg. some set up as planned), you might DUMP DB before hammering it with UPDATE Xacts.

> 2. How do I monitor the size of the transaction log?

1> USE <db_name>
2> GO
1> sp_helpdb <db_name>
2> GO

(The nuance between USE vs <db_name>..sp_something will be understood later.)

3> sp_helplog

3> sp_helpsegment -- without parms

3> sp_helpsegment logsegment

3> sp_helpthreshold -- without parms

3> sp_helpthreshold logsegment

> 3. How do I avoid the above message in the first place?

(Sorry, I have the details in the reverse order. See above.)

You can’t work from the err msg backwards.

Sybase is not a toy database server (PoopGres is a toy non-server). It takes database recovery seriously. Eg. you might have just LOADed a production database, into a disaster recovery or UAT database container, and have started using it, and your log is not set up (accident). So that tran-log is actually required for recovery, and must not be lost. Hence it is operating correctly.

You have to take responsibility as a Sybase DBA (welcome to the club).

Any and all config and settings re recovery must be made explicitly.

The first elemental thing a Sybase DBA has to do is manage the tran-logs of ALL dbs. (For context, the second elemental thing is to manage tempdb usage.) Sybase provides various options and methods. The problems can be eliminated/reduced by Standards; good practice; etc. And that too depends on (a) proper education, (b) extent of need per database, and (c) your ability to code auto-administration scripts and stored procs. Which avoids manual commands upon problem situations arising. Starting with the notion that you can just install Sybase and code a benchmark will be a disaster; I expected that you would read at least the SAGs, and get to know what a real server does, and how to administer it.

Otherwise the problems will be exposed via err msgs, and then you are reacting, and reacting without set up or knowledge. And this media is not the way to help you reasonably.

Generally, you have to set up, in this order:

1. think about how you want your DATA vs LOG distributed.

2. think about how you want your RECOVERY to be done (Dev vs Benchmark vs Production).

3. set up DEVICES accordingly
__ I recommend (instead of your 100gb), which must be RAW PARTITIONS (not filesystem-files):
____ 8 x 128MB for DATA (for parallelism)
____ 1 x 128MB for Non-Clustered Indices
____ 1 x 128MB for TRAN-LOG for EACH database SEPARATELY

4. set up your DATABASES accordingly, ala sp_dboption

5. either
__ (D) no recovery (Dev: TRUNC-LOG-ON-CHKPT) or
__ (A) reasonable recovery (Benchmark) or
__ (B) minimum-production-recovery (Production)

6. for (A)(B), set up a threshold at least on the logsegment, to automatically DUMP TRAN
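
The devices in step 3 would be created along these lines (the physical raw-partition paths are illustrative; use your own):

DISK INIT name = 'DATA_DEV_1', physname = '/dev/raw/raw1', size = '128M'
-- repeat for DATA_DEV_2 through DATA_DEV_8
DISK INIT name = 'NCI_DEV_1', physname = '/dev/raw/raw9', size = '128M'
DISK INIT name = 'LOG_DEV_1', physname = '/dev/raw/raw10', size = '128M'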

----
> a pretty default ASE installation.

Bad idea, if it is intended for a benchmark. Fine, if it is intended for dev; learning. Then you must read the SAGs, get some experience, including handling situations such as this, and then sp_configure each and every config parm that is deemed relevant to a reasonable benchmark config.

Same as setting up your DEVICES correctly. If you don’t, the benchmark will be meaningless.

----
General
1. For admin tasks, most people use a 3rd party DBA Admin tool, such as DBArtisan.

Sybase does have a free DB Admin tool, free with a licence for ASE, a very nice GUI (not as amazing as DBArtisan which I have), but I don’t know if it is included in the dev version (“express edition”). It is called something like “enterprise manager”, look for it.

If you don’t, you are operating 35 years behind other Sybase DBAs, at isql + scripts level.

2. For SQL coding, most people use an IDE, such as SQLProgrammer. There are many.

When I go on short assignments, if the cust does not have [1][2], I download and use Komodo.

3. You have to learn and understand DEVICES and DATA STRUCTURES. Otherwise the benchmark would be nonsense.
First, the SAGs.
Second, a logical overview:
__ https://www.softwaregems.com.au/Documents/Article/Sybase%20Data%20Storage/1%20Data%20Storage%20Unit.html

Cheers
Derek

Derek Ignatius Asirvadem

Aug 29, 2021, 3:57:19 AM
Nicola

> On Sunday, 29 August 2021 at 12:53:00 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Sunday, 29 August 2021 at 04:46:23 UTC+10, Nicola wrote:

> > create database scratch on userdbdev = '1g' log on userlogdev = '1g'

> c. for a benchmark or production you need a purpose-written sp (because it uses site-specific resources and settings)
> d. I have high-activity Production dbs with 500gb data and 100mb tran-log. Ie. the tran-log size is dependent on (i) activity, and (ii) max tran size [which should be small], and (iii) how it gets dumped, which is that sp.


> But you are much better off defining DUMP-DEVICES.

> 1> -- [C] When you get 1105 or “log suspended”, it is usually too late, so you need:
> 2> DUMP TRAN <db_name> WITH TRUNCATE_ONLY
> 3> -- or worse:
> 4> DUMP TRAN <db_name> WITH NO_LOG
> 3> GO
>
> Which clears the 1105 or “log suspend”, but now recovery on that db is from a DB-dump-file only. So you must DUMP DATABASE to produce a fresh one.

> [E] Set up any db to be automatically tran-log dumped when “full”. That means thresholds on the logsegment, and a threshold stored proc to be executed when a threshold is reached. There can be more than one threshold, more than one escape determination. Eg. for tempdb, I TRUNCATE; for production I DUMP; for emergency in tempdb, I kill tasks (which clears their data usage).


> 3> sp_helpthreshold -- without parms
>
> 3> sp_helpthreshold logsegment

> You have to take responsibility as a Sybase DBA (welcome to the club).
>
> Any and all config and settings re recovery must be made explicitly.

> Generally, you have to set up, in this order:
>
> 1. think about how you want your DATA vs LOG distributed.
>
> 2. think about how you want your RECOVERY to be done (Dev vs Benchmark vs Production).
>
> 3. set up DEVICES accordingly
> __ I recommend (instead of your 100gb), which must be RAW PARTITIONS (not filesystem-files):
> ____ 8 x 128MB for DATA (for parallelism)
> ____ 1 x 128MB for Non-Clustered Indices
> ____ 1 x 128MB for TRAN-LOG for EACH database SEPARATELY

There is no problem to have a set of DATA DEVICES for all dbs, ie. shared, or separate DEVICES for each db, or a mix. You don’t want your raw partitions dropped/created all the time, but you do want to drop/create dbs. You do want several DATA DEVICES for each db, definitely not one, because that would strangle the I/O through a single [Disk I/O Structure]. That is not parallelism but simple load spreading instead of strangling.

For parallelism, one does not need separate DEVICES for each TABLE-PARTITION, but of course for best parallelism, each TABLE-PARTITION should be on a separate DATA DEVICE. Do not expect mickey mouse parallelism (spread across machines !!!) here, figure out what genuine parallelism is.

But for the TRAN-LOG of each db, they should be separate LOG DEVICES, not shared.

Eg:
CREATE DATABASE SCRATCH ON
____ DATA_DEV_1 = 32MB,
____ DATA_DEV_2 = 32MB,
____ DATA_DEV_3 = 32MB,
____ DATA_DEV_4 = 32MB
__ LOG ON
____ LOG_DEV_1 = 16MB

CREATE DATABASE FOO ON
____ DATA_DEV_1 = 64MB,
____ DATA_DEV_2 = 32MB,
____ DATA_DEV_3 = 64MB,
____ DATA_DEV_4 = 32MB
__ LOG ON
____ LOG_DEV_3 = 16MB
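
Within a db, tables and indices are then tied to particular devices via segments. A sketch, assuming the devices above (segment, table and column names are illustrative):

1> USE FOO
2> GO
1> EXEC sp_addsegment seg_customer, FOO, DATA_DEV_1
2> GO
1> CREATE TABLE Customer ( CustomerId int NOT NULL ) ON seg_customer
2> GO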


> 4. set up your DATABASES accordingly, ala sp_dboption
>
> 5. either
> __ (D) no recovery (Dev: TRUNC-LOG-ON-CHKPT) or
> __ (A) reasonable recovery (Benchmark) or
> __ (B) minimum-production-recovery (Production)
>
> 6. for (A)(B), set up a threshold at least on the logsegment, to automatically DUMP TRAN

Here is such an sp, to get you started. The default name is sp_thresholdaction. You need to understand the previous post, understand what ASE is doing and why, in order to understand this sp.
__ https://www.softwaregems.com.au/Documents/Tutorial/Sybase/sg_thresholdaction_Public.sql

And then tell ASE about it. Read up on Last Chance Threshold, which you cannot set. Set up a second (non-LCT) threshold somewhat earlier, eg. at 80% full. For Production, I would have a third at 90%. These are pages:
EXEC sp_addthreshold sg_test, logsegment, 2048, sp_thresholdaction
EXEC sp_addthreshold sg_test, logsegment, 4096, sp_thresholdaction

That sp is good for ANY segment, not just the logsegment; it will assist (notices on the errorlog) with your data/NCI/table-partition space planning.

----
Tran Log Size
When you run into problems (such as you have), NEVER extend the Log File size. Why ? That is what newbies do, and it is never-ending, because it does not deal with the Cause, it only extends the Effect. As evidenced, even 1GB [which is gigantic] is not enough in some circumstances. Figure out the activity (huge Transaction) and fix that. Use:
__ sp_who
__ sp_lock
__ SELECT * from master..syslogshold
__ <db_name>..sp_spaceused syslogs

For Log Space Used:
> 3> sp_helplog
> 3> sp_helpsegment -- without parms
> 3> sp_helpsegment logsegment
> 3> sp_helpthreshold -- without parms
> 3> sp_helpthreshold logsegment

__ <db_name>..sp_spaceused syslogs
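
Eg. a sketch of finding the oldest open transaction holding up the log (db name per your setup):

SELECT spid, starttime, name
__ FROM master..syslogshold
__ WHERE dbid = db_id('scratch')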

FYI. For DEV databases, I purposely set the log size to 50MB (where the data size could be 10GB to 50GB). So that any idiot who dares to NOT follow the SG OLTP Template and rules, who writes a Transaction that is too large, will crash, and he can sit there and get himself out of his self-created mess, with the other developers haranguing him, until he does. This is to ensure that even when they are writing discardable code, they write it correctly.

Eg. SELECT * is not permitted, it must always have a column list.

> ----
> General
> 1. For admin tasks, most people use a 3rd party DBA Admin tool, such as DBArtisan.
>
> Sybase does have a free DB Admin tool, free with a licence for ASE, a very nice GUI (not as amazing as DBArtisan which I have), but I don’t know if it is included in the dev version (“express edition”). It is called something like “enterprise manager”, look for it.

It is now called Cockpit. SAP has stuffed it up, it is not as good as it used to be (under Sybase).

----
Remember, PigPoopGres does not implement SQL as a Language, it is bits and pieces triggered from here and there. Triggers for this; farts for that; different syntax depending on where you call it from, and half of SQL missing. And anti-SQL set up as “SQL”. Forever changing with each version, meaning re-writing code is “normal”.

Sybase; DB2; MS; Informix, all implement SQL as (a) a full-blown language (Parser; Query Optimiser; etc), that is (b) consistent, no matter where you call it from. As far as code is concerned, it is “write and forget”.

For server and db admin that is beyond SQL, which is an awful lot, (c) the System Stored Procs. A separate manual; you have to get to know and love them.
__ https://help.sap.com/doc/a612a55abc2b1014b17ffcb6db323b36/16.0.4.0/en-US/SAP_ASE_Reference_Manual_Procedures_en.pdf

Per Codd’s Twelve Rules/Rule Four, everything re the server and the dbs is in ordinary tables, accessible. System Tables manual.
__ https://help.sap.com/doc/a612d2d0bc2b1014a731f1be020d1d58/16.0.4.0/en-US/SAP_ASE_Reference_Manual_Tables_en.pdf

What used to be a great System Tables Poster (everything squeezed onto a single page) has been butchered by SAP, which calls this silliness “ERD”.
__ https://wiki.scn.sap.com/wiki/download/attachments/421365216/ASE_16.0_SP02_SYS_TABLE_POSTER.pdf?version=1&modificationDate=1506093461000&api=v2

Get started at the SAP Developer Community forums. SAP is huge, and runs on several SQL platforms, stick to “SAP ASE” to avoid confusion.
__ https://community.sap.com/topics/applications-on-ase

Cheers
Derek