
running db2 command in background


annecar...@gmail.com

Dec 6, 2008, 12:22:09 PM
I have three scripts called extract_data_1.sql, extract_data_2.sql,
and extract_data_3.sql, each of which exports data from a DB2 UDB
database (on UNIX). Since the data exports are long-running, I wanted
to call these scripts from a Unix shell script in the background (so
that they can all be kicked off at the same time). My Unix script is
below:

#!/bin/bash

# run the db2profile
. /home/db2i21/sqllib/db2profile

db2 -tvf extract_data_1.sql &
db2 -tvf extract_data_2.sql &
db2 -tvf extract_data_3.sql &
exit

DB2 is throwing this error: "DB21018E A system error occurred. The
command line processor could not continue processing."

Why is it causing this error? Am I missing anything?

Thanks

Ian

Dec 6, 2008, 2:30:14 PM

This is expected. The 'db2' process (the "front-end" process)
communicates with the 'db2bp' process (the "back-end" process), and
db2bp does all of the actual work. You can only have a single db2bp
associated with your shell, and db2bp can only communicate with a
single db2 process, which is why you're failing.

To do multiple exports in parallel, you need to have each run
inside a different shell:

#!/bin/ksh

# Run one export in its own subshell so each gets its own db2bp
# back-end (renamed from 'export' to avoid the shell built-in).
function run_export {
    file=$1

    (
        . /home/db2i21/sqllib/db2profile
        db2 -tvf ${file}
    ) &
}


run_export extract_data_1.sql
run_export extract_data_2.sql
run_export extract_data_3.sql

wait

print "Extracts done."

annecar...@gmail.com

Dec 6, 2008, 5:25:47 PM
Thanks, Ian. It worked.

On Dec 6, 2:30 pm, Ian <ianb...@mobileaudio.com> wrote:

> print "Extracts done."

Darin McBride

Dec 11, 2008, 11:50:19 PM

I wonder if a simpler version would be to just run:

for x in 1 2 3; do
    sh -c "db2 -tvf extract_data_$x.sql" &
done

Another alternative that I like to use for long-running, parallelisable
tasks is a makefile. It takes a lot of extra work to set up, but it
scales incredibly well. Further, the job server lets you do some really
nifty things. For example, if you had 20 such tasks but only wanted
three going at a time, "make -j3" would do that (once the makefile was
written). If, later, you upgraded your network, disk, CPU, whatever, and
decided that you could do more (or maybe 3 was taxing it too hard), you
could simply change the -j option up (or down) for future runs. All
the hard work is already done for you.
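
For example, a minimal sketch of a makefile for the three exports in
this thread might look something like this (GNU make syntax; the .done
marker files are just an illustration, and recipe lines must start with
a tab):

# One marker file per export; make skips any export whose marker is
# already newer than its .sql script.
SCRIPTS = extract_data_1.sql extract_data_2.sql extract_data_3.sql
MARKERS = $(SCRIPTS:.sql=.done)

all: $(MARKERS)

%.done: %.sql
	. /home/db2i21/sqllib/db2profile && db2 -tvf $< && touch $@

Running "make -j3" then starts at most three exports at a time, and
changing the -j value changes the degree of parallelism without
touching the makefile itself.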

Ok, writing a makefile isn't easy. But it's still orders of magnitude
easier than writing the parallel logic required for this type of
flexibility.

Ian

Dec 17, 2008, 12:19:52 PM

You *must* be a developer. ;-)


I once wrote a makefile (and a couple of helper scripts) to build a
database schema, with one file for each object (table, index, key,
view, etc.) and complete handling for dependencies. This was way back
when ALTER was practically useless (v7.2).

It was really effective, but because I didn't have a way to
automatically generate the makefile dependencies, it was too much work
to maintain.


premku...@gmail.com

Feb 15, 2013, 9:13:54 AM
Hi, I am new to DB2. When I open the command line processor it shows
this error: "DB21018E A system error occurred. The command line
processor could not continue processing."
OS: Windows Vista 32-bit
DB2 Express-C

Please reply; I am waiting for your response.

Lennart Jonsson

Feb 15, 2013, 9:26:50 AM