
Using DBMS_JOB and DBMS_PIPE to mimic multithreaded PL/SQL application


Paul Jones

Nov 17, 2003, 6:47:27 PM
Hi

I am trying to use the DBMS_PIPE and DBMS_JOB PL/SQL packages (in 8i)
to write a multithreaded Oracle package.
From what I understand, the pseudo-code for my main and child procedures
would be something like:

PROC 1
- Setup DBMS_PIPE package to receive messages
- Spawn other procedure (PROC 2) multiple times using DBMS_JOB
- Receive messages from spawned procedures using DBMS_PIPE
END PROC 1

PROC 2
- Do processing
- Send message to PROC 1 using DBMS_PIPE
END PROC 2

What I would like is any examples of working code that anyone may have
that does just this. It does not appear too difficult, but it's
easier to work from an example given that I have not done this before.
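
Something along these lines, perhaps (an untested sketch; the procedure
names, pipe name and worker count are all made up):

```sql
-- PROC 1: coordinator. Creates a pipe, spawns workers via DBMS_JOB,
-- then blocks on the pipe until every worker has reported back.
CREATE OR REPLACE PROCEDURE run_batches AS
  l_job    BINARY_INTEGER;
  l_status INTEGER;
  l_msg    VARCHAR2(100);
BEGIN
  l_status := DBMS_PIPE.CREATE_PIPE('batch_results');
  FOR i IN 1 .. 4 LOOP
    DBMS_JOB.SUBMIT(l_job, 'worker(' || i || ', ''batch_results'');');
  END LOOP;
  COMMIT;  -- submitted jobs only become visible to SNP processes after commit
  FOR i IN 1 .. 4 LOOP
    l_status := DBMS_PIPE.RECEIVE_MESSAGE('batch_results');
    DBMS_PIPE.UNPACK_MESSAGE(l_msg);
  END LOOP;
  l_status := DBMS_PIPE.REMOVE_PIPE('batch_results');
END;
/

-- PROC 2: worker. Does its share of the work, then signals completion.
CREATE OR REPLACE PROCEDURE worker(p_batch IN NUMBER,
                                   p_pipe  IN VARCHAR2) AS
  l_status INTEGER;
BEGIN
  -- ... do the processing for batch p_batch ...
  DBMS_PIPE.PACK_MESSAGE('batch ' || p_batch || ' done');
  l_status := DBMS_PIPE.SEND_MESSAGE(p_pipe);
END;
/
```

Note that job_queue_processes must be set high enough for the jobs to
actually run concurrently.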

Thanks
Paul

Pete Finnigan

Nov 18, 2003, 7:01:26 AM
Hi Paul,

Quite an interesting idea. I did something similar a few years ago to
mimic a VMS program on Solaris when we migrated a legacy Cognos
application from VMS/Rdb to Solaris/Oracle and a newer version of Cognos
PowerHouse. Instead I used a combination of triggers and pipes rather
than DBMS_JOB. The way we achieved concurrency was to launch child
PL/SQL processes from a shell script daemon. The daemon updated itself
from messages passed from the children. The messages were sent from
triggers that fired when the job status queue (table) was updated. I
don't have a copy of the code, though.

BTW, why are you trying to do this?

kind regards

Pete
--
Pete Finnigan
email:pe...@petefinnigan.com
Web site: http://www.petefinnigan.com - Oracle security audit specialists
Book: Oracle Security Step-by-Step Guide - see http://store.sans.org for details.

Paul Jones

Nov 18, 2003, 5:54:28 PM
>
> BTW, why are you trying to do this?
>

We currently have a batch process that takes 5 hours to process around
5000 records. Not very efficient (CPU not used effectively), but the
processing logic is very complex and we've already made a lot of gain
by fixing the logic and SQL. However, we are performing the same
logic on all 5000 records, so it stands to reason that I can increase
performance if I can process more than one at a time.

The way I will do it is:
- Select 5000 records using complex SQL
- Insert these records into a table with the addition of a batch
number (i.e. records 1-100 get batch number 1000, records 101-200 get
batch number 1001, etc.)
- Spawn a child job to process one batch number of 100
records.
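
For what it's worth, the batch numbers in the second step could be
assigned in the insert itself (a sketch only; the staging table and
column names are invented, and the complex SELECT is elided):

```sql
-- Assign batch 1000 to rows 1-100, 1001 to rows 101-200, and so on,
-- as the rows from the complex query are inserted into the work table.
INSERT INTO batch_work (rec_id, batch_no, payload)
SELECT rec_id,
       999 + CEIL(ROWNUM / 100),  -- 100 records per batch
       payload
FROM   ( /* ... the complex SELECT ... */ );
```

Each child job then just takes a single batch_no as its parameter.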

Daniel Morgan

Nov 18, 2003, 7:12:09 PM
Paul Jones wrote:

While acknowledging that I know little or nothing of what you are
trying to do, the above looks absolutely preposterous.

What will your organization pay into a scholarship fund if one of my
students gets this down to less than one hour? How about less than 30
minutes? How about less than 10 minutes?
--
Daniel Morgan
http://www.outreach.washington.edu/ext/certificates/oad/oad_crs.asp
http://www.outreach.washington.edu/ext/certificates/aoa/aoa_crs.asp
damo...@x.washington.edu
(replace 'x' with a 'u' to reply)

Jan Korecki

Nov 18, 2003, 8:05:06 PM
Hi!

I might have missed something about why you want to use DBMS_PIPE, but
you could parallelize the batch like this:


Start up to 36 parallel job processes (up to 1000 in 9i)

dbms_job.submit( l_job, 'process(' || l_batchnumber || ');' );
dbms_job.submit( l_job, 'process(' || l_batchnumber || ');' );
dbms_job.submit( l_job, 'process(' || l_batchnumber || ');' );
...

loop
  if (any available job processes AND any more batches left to be run) then
    start up x number of new jobs;
  elsif (all batches are processed) then
    exit;
  end if;

  dbms_lock.sleep(60);  -- sleep for a minute or so

end loop;

To check for available batches to run while in the loop: when the
procedure process starts, it updates a parameter table to say that
batch id is running; when it is done, it updates the parameter table
to say the batch id has finished.
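
Fleshed out, that control loop might look like this (a sketch only;
the status table, its columns and the concurrency limit of 10 are all
invented, and the process procedure is assumed to set its own row to
RUNNING and then FINISHED):

```sql
DECLARE
  l_job     BINARY_INTEGER;
  l_active  NUMBER;
  l_pending NUMBER;
BEGIN
  LOOP
    -- How many batches are still in flight or waiting?
    SELECT COUNT(*) INTO l_active
    FROM   batch_status WHERE status IN ('SUBMITTED', 'RUNNING');
    SELECT COUNT(*) INTO l_pending
    FROM   batch_status WHERE status = 'PENDING';

    EXIT WHEN l_active = 0 AND l_pending = 0;  -- all batches processed

    IF l_active < 10 AND l_pending > 0 THEN
      -- Top up to the concurrency limit with new jobs.
      FOR r IN (SELECT batch_no FROM batch_status
                WHERE  status = 'PENDING'
                AND    ROWNUM <= 10 - l_active) LOOP
        DBMS_JOB.SUBMIT(l_job, 'process(' || r.batch_no || ');');
        UPDATE batch_status SET status = 'SUBMITTED'
        WHERE  batch_no = r.batch_no;
      END LOOP;
      COMMIT;  -- make the jobs visible to the job queue processes
    END IF;

    DBMS_LOCK.SLEEP(60);  -- poll roughly once a minute
  END LOOP;
END;
/
```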


Regards,
Janne!

Pete Finnigan

Nov 19, 2003, 7:30:50 AM
Hi Paul,

Without seeing your complex logic I have to say 5 hours for 5000 records
does not sound very good. I fixed a PL/SQL batch process a few years ago
that processed millions (7-10 million) of financial transactions per
night: it had taken 18 hours, and we got it to process them in approx 2 or
less. This process loaded data using SQL*Loader and used complex PL/SQL
(a few thousand lines) and many different processes, depending on
transaction types, to populate a lot of tables. We did parallelize some
of the processing where possible, but also tuned the bad SQL and PL/SQL.
The tuning, and sorting out badly laid-out tablespaces/datafiles on disk,
fixed most of it. The I/O was all over the place and it didn't use RAID.

Maybe you should consider tracing, profiling and tuning the SQL and
PL/SQL first?
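
For the tracing and profiling, something like this is the usual
starting point in 8i (assuming DBMS_PROFILER has been installed with
profload.sql/proftab.sql; the run comment is arbitrary):

```sql
-- SQL trace for the session running the batch; format the resulting
-- trace file afterwards with tkprof.
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- Or profile the PL/SQL itself, line by line.
DECLARE
  l_ret BINARY_INTEGER;
BEGIN
  l_ret := DBMS_PROFILER.START_PROFILER('batch run');
  -- ... run the batch procedure here ...
  l_ret := DBMS_PROFILER.STOP_PROFILER;
END;
/
```

The profiler results land in the plsql_profiler_* tables, which show
where the time actually goes before anyone touches the logic.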

Paul Jones

Nov 19, 2003, 7:54:49 PM
I've worked on many batch systems and this is the slowest I've seen!
Ideally, yes, I would like to fix the PL/SQL. However, the client is
overly sensitive to anyone touching the code due to the number of
functional problems they've had. However, the SQL is definitely
something I will be tuning.

I plan on getting the processing down to max. 30 minutes which seems
more than enough for 5000 records!!


sybr...@yahoo.com

Nov 20, 2003, 9:17:54 AM
Pete Finnigan <pl...@petefinnigan.com> wrote in message news:<8I2$qRB6J2u$Qx...@peterfinnigan.demon.co.uk>...
> Hi Paul,
>
> without seeing your complex logic I have to say 5 hours for 5000 records
> does not sound very good. I fixed a PL/SQL batch process a few years ago
> that processed millions (7-10 million) of financial transactions per
> night in 18 hours to process them in approx 2 or less. This process
> loaded data using sql*loader and used complex pl/sql (few thousand
> lines) and many different processes depending on transaction types to
> populate a lot of tables. We did parallel up some of the processing
> where possible but also tuned the bad SQL and PL/SQL. The tuning and
> sorting out badly laid out tablespaces/datafiles on disk fixed most of
> it. The i/o was all over the place and it didn't use RAID.
>
> Maybe you should consider tracing profiling and tuning the SQL and
> PL/SQL first?
>
> kind regards
>
> Pete


Why would anyone solve a problem when you can 'work around' it and
make yourself indispensable by creating your own mess?

Sybrand Bakker
Senior Oracle DBA
