db2 stored procedure - passing parameters issue
Static SQL within the procedure:
where char_account_no in ( :ws-acct-list )
If I pass 1234567 it returns the correct result for the single account.
If I pass '1234567' it returns nothing.
If I pass '1234567','2345678' it returns nothing.
Anyone have any idea of the syntax to pass multiple account #'s?
Something like '1234567','2345678','3456789'.
I was assuming this is a numeric column such as INT, SMALLINT, or
BIGINT. The Standard SQL syntax for that would be
where char_account_no in (1234567,2345678,3456789), but you are trying
to pass them as one parameter defined as CHAR or VARCHAR.
Correction, I see from the column name (char_account_no) that it is a
character column; however, the same problem applies. With static SQL
you cannot put multiple values into one host variable, even if you
include the commas and quotes. It would work if you used dynamic SQL.
Hi Mark,
sorry if I didn't make it clear: it's a character column,
'char_account_no'.
In DB2 Developer Workbench (JDBC driver), if I enter a 7-character
value without quotes, it finds the data. Putting quotes around it
(either single or double) causes no data to be found.
Calling it from .NET using the ODBC / CLI driver gives the
same result.
If I execute the SQL directly via SPUFI, using the format
where char_account_no in ( '1234567' , '2345678' , '3456789' )
it works fine
- so I presume a host variable is expected to hold a single
value....
- sounds like I have to convert it to dynamic SQL and do PREPARE /
EXECUTE etc.
The account list can be several thousand long.... so I don't want
to code
where char_account_no in
( :act01, :act02, :act03 etc.... )
Thanks
Bill
Yes, I think you need to do dynamic SQL.
Oops, hit the send button by mistake :-). Don't know if it applies to
your platform, but if it exists you might want to have a look at global
temporary tables. In short, you insert all the account_no's into such a
table and then join against it in the procedure. Just a thought.
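Something like this, e.g. on DB2 LUW (the table name and shape are
just an illustration):
-- declare the session temp table (once per session)
DECLARE GLOBAL TEMPORARY TABLE session.acct_list
 (char_account_no CHAR(7) NOT NULL)
 ON COMMIT PRESERVE ROWS NOT LOGGED;
-- the caller fills it before calling the procedure
INSERT INTO session.acct_list VALUES ('1234567'), ('2345678');
-- inside the procedure, the static SQL becomes
SELECT ...
  FROM ...
 WHERE char_account_no IN
       (SELECT char_account_no FROM session.acct_list);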
/Lennart
Parameters in SQL are scalar values of exactly one data type -- not
lists, arrays, pointer chains or other fancy things. That is why
1234567 returns the correct result for the single account. The string
'1234567' returns nothing because the quote marks become part of the
value being compared. The string '1234567','2345678' returns nothing
because it has embedded quote marks *and* makes no sense as a single
scalar value in SQL.
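You can see the quote problem directly; a quick check (illustrative
only):
-- the column value is 7 characters; the quoted parameter is 9,
-- so the comparison can never match
SELECT LENGTH('1234567'),     -- 7
       LENGTH('''1234567''')  -- 9: the quotes are part of the data
  FROM sysibm.sysdummy1;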
>> Anyone have any idea of the syntax to pass multiple account #'s? <<
One really bad way is to write a parser in one query that disassembles
the string into a one-column table expression. I know it is a bad way
-- I wrote one of the first versions of it in a newsgroup and put it
in one of my books many years ago!
Slightly better, you can use dynamic SQL, which is very costly.
It is better to load a table with all the constraints you need on the
account numbers -- validating the check digits and ranges, removing
redundant duplicates, joining them to other data, etc.
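A sketch of such a working table, with a made-up all-digits CHECK
standing in for whatever your real account rules are:
-- the PRIMARY KEY removes redundant duplicates;
-- the TRANSLATE trick maps every digit to 'D'
CREATE TABLE WorkingAccounts
(char_account_no CHAR(7) NOT NULL PRIMARY KEY,
 CHECK (TRANSLATE(char_account_no,
                  'DDDDDDDDDD', '0123456789') = 'DDDDDDD'));
INSERT INTO WorkingAccounts VALUES ('1234567'), ('2345678');
SELECT ...
  FROM ...
 WHERE char_account_no IN
       (SELECT char_account_no FROM WorkingAccounts);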
>> The account list can be several thousand long.... so I don't want to code WHERE account_nbr IN (:act01, :act02, :act03 etc.... ) <<
Actually, don't knock it. Would you flinch at a 1000-element array in
a procedural language? DB2 can handle up to 32K parameters and do all
the argument validation for you. A text editor can write an insanely
long parameter list for you instantly.
My rule of thumb is to stop at 100 parameters and switch to the
working table. No real reason for that magic number.
I have a Sudoku solver written by Richard Romley in ONE query that
takes 81 parameters -- it flies like a bat out of hell in SQL Server.
Here is an example...
WITH
find_separator( p , n ) AS (
-- anchor: start at position 1, separator count 0
SELECT 1 , 0 FROM sysibm.sysdummy1
UNION ALL
-- step: position just past the next comma, or past the end
-- of the string when no comma is left
SELECT COALESCE( NULLIF( LOCATE( ','
                               , ws_acct_list
                               , p )
                       , 0 )
               , LENGTH(ws_acct_list) + 1
               ) + 1
     , n + 1
FROM find_separator
WHERE n < 100000
AND p <> LENGTH(ws_acct_list) + 2
)
SELECT ...
FROM ...
WHERE char_account_no IN
(SELECT SUBSTR( ws_acct_list
              , p1
              , p2 - p1 - 1 )
FROM find_separator s1( p1 , n1 )
   , find_separator s2( p2 , n2 )
WHERE n2 = n1 + 1
AND p2 > p1 + 1
)
The same parsing logic packaged as a table UDF:
SELECT ...
FROM ...
WHERE char_account_no IN
(SELECT item
FROM TABLE
( get_item( :ws-acct-list ) ) AS t
)
;
CREATE FUNCTION get_item
( item_list VARCHAR(32000)
, separator CHAR(1)
)
RETURNS TABLE( item VARCHAR(254) )
READS SQL DATA
DETERMINISTIC
NO EXTERNAL ACTION
RETURN
WITH
find_separator( p , n ) AS (
VALUES ( 1 , 0 )
UNION ALL
SELECT COALESCE( NULLIF( LOCATE( separator
                               , item_list
                               , p )
                       , 0 )
               , LENGTH(item_list) + 1
               ) + 1
     , n + 1
FROM find_separator
WHERE n < 100000
AND p <> LENGTH(item_list) + 2
)
SELECT SUBSTR( item_list , p1 , p2 - p1 - 1 )
FROM find_separator fs1( p1 , n1 )
   , find_separator fs2( p2 , n2 )
WHERE n2 = n1 + 1
AND p2 > p1 + 1
;
CREATE FUNCTION get_item
( item_list VARCHAR(32000)
)
RETURNS TABLE( item VARCHAR(254) )
READS SQL DATA
DETERMINISTIC
NO EXTERNAL ACTION
RETURN
SELECT item
FROM TABLE( get_item( item_list , ',' ) ) AS t
;
1) The dangerous, slow kludge is to use dynamic SQL and admit that any
random future user is a better programmer than you are. It is used by
Newbies who do not understand SQL or even what a compiled language
is. A string is a string; it is a scalar value like any other
parameter; it is not code. Again, this is not just an SQL problem;
this is a basic misunderstanding of programming principles.
2) Passing a list of parameters to a stored procedure can be done by
putting them into a string with a separator. I like to use the
traditional comma. Let's assume that you have a whole table full of
such parameter lists:
CREATE TABLE InputStrings
(keycol CHAR(10) NOT NULL PRIMARY KEY,
input_string VARCHAR(255) NOT NULL);
INSERT INTO InputStrings VALUES ('first', '12,34,567,896');
INSERT INTO InputStrings VALUES ('second', '312,534,997,896');
etc.
The goal is to produce, for each key, the original key column and one
parameter per row.
It makes life easier if the lists in the input strings start and end
with a comma. You will also need a table of sequential numbers -- a
standard SQL programming trick; a minimal version is sketched below.
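A minimal sketch of such a Sequence table (the name and the way it is
populated are illustrative; any table of integers 1..n will do):
CREATE TABLE Sequence (seq INTEGER NOT NULL PRIMARY KEY);
-- cross three copies of the digits 0-9 to get 1..1000
INSERT INTO Sequence (seq)
WITH Digits(n) AS
 (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9))
SELECT ones.n + 10 * tens.n + 100 * hundreds.n + 1
  FROM Digits AS ones, Digits AS tens, Digits AS hundreds;
Now, the query: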
CREATE VIEW ParmList (keycol, place, parm)
AS
SELECT keycol,
       COUNT(S2.seq), -- reverse order
       CAST (SUBSTRING (I1.input_string
                        FROM S1.seq
                        FOR MIN(S2.seq) - S1.seq - 1)
             AS INTEGER)
FROM InputStrings AS I1, Sequence AS S1, Sequence AS S2
WHERE SUBSTRING (',' || I1.input_string || ',' FROM S1.seq FOR 1) = ','
AND SUBSTRING (',' || I1.input_string || ',' FROM S2.seq FOR 1) = ','
AND S1.seq < S2.seq
AND S2.seq <= CHAR_LENGTH(I1.input_string) + 2
GROUP BY I1.keycol, I1.input_string, S1.seq;
The S1 and S2 copies of Sequence are used to locate bracketing pairs
of commas, and the entire set of substrings located between them is
extracted and cast as integers in one non-procedural step. The trick
is to be sure that the right-hand comma of the bracketing pair is the
closest one to the left-hand comma. The relative position of each
element in the list is given by the value of "place", but it counts
down rather than up, so you can plan horizontal placement in columns.
This might be faster now:
WITH Commas(keycol, comma_seq, comma_place)
AS
(SELECT I1.keycol, S1.seq,
        ROW_NUMBER() OVER (PARTITION BY I1.keycol ORDER BY S1.seq)
 FROM InputStrings AS I1, Sequence AS S1
 WHERE SUBSTRING (',' || I1.input_string || ','
                  FROM S1.seq
                  FOR 1) = ','
 AND S1.seq <= CHAR_LENGTH (I1.input_string) + 2)
SELECT SUBSTRING(',' || I1.input_string || ','
                 FROM C1.comma_seq + 1
                 FOR C2.comma_seq - C1.comma_seq - 1)
FROM InputStrings AS I1, Commas AS C1, Commas AS C2
WHERE C2.comma_place = C1.comma_place + 1
AND C1.keycol = C2.keycol
AND C1.keycol = I1.keycol;
The idea is to get all the positions of the commas in the CTE and then
use (n, n+1) pairs of those positions to locate the substrings. The
hope is that the ROW_NUMBER() is faster than the GROUP BY in the first
attempt. Since the CTE is materialized before the body of the query
(in theory), there are opportunities for parallelism, indexing and
other things to speed up the works.
Hey, I can write kludges with the best of them, but I don't. You need
to at the very least write a routine to clean out blanks, handle
double commas and non-numerics in the strings, take care of floating
point and decimal notation, etc. Basically, you must write part of a
compiler in SQL. Yeeeech! Or decide that you do not want to have
data integrity, which is what most Newbies do in practice although
they do not know it.
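As a taste of that scrubbing, here is a minimal sketch that only
strips blanks and collapses doubled commas (one pass of REPLACE is
shown; it really needs reapplying until no ',,' remains, which is
exactly the point about writing a compiler):
SELECT REPLACE(REPLACE(input_string, ' ', ''), ',,', ',')
  FROM InputStrings;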
A procedural loop is even worse. You have no error checking, no
ability to pass local variables or expressions, etc.
CREATE PROCEDURE HomemadeParser(@input_string VARCHAR(8000))
AS
BEGIN
DECLARE @comma_position INTEGER;
CREATE TABLE #Slices
(slice_value INTEGER);
SET @input_string = @input_string + ','; -- add sentinel comma
SET @comma_position = CHARINDEX(',', @input_string);
WHILE @comma_position > 1 -- silently quits on an empty slice
 BEGIN
 -- peel off everything before the next comma and cast it
 INSERT INTO #Slices (slice_value)
 VALUES(CAST(LEFT(@input_string, (@comma_position - 1)) AS
 INTEGER));
 SET @input_string = RIGHT(@input_string,
                           LEN(@input_string) - @comma_position);
 SET @comma_position = CHARINDEX(',', @input_string);
 END;
SELECT slice_value FROM #Slices; -- hand back the slices
END;
Better answer:
Do this with a long parameter list. You can pass up to 2,100
parameters in T-SQL, which is more than you will probably ever need.
The compiler will do all the error checking that the query version
and the procedural code simply do not have, unless you write a full
parser with the standard error codes. You can pass local variables
to your procedure; you can pass other data types and get automatic
conversions, etc. In short, this is just good software engineering.
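A sketch of the long-parameter-list approach (the names are
illustrative, and the NULL defaults let the caller fill only as many
slots as needed):
CREATE PROCEDURE GetAccounts
(@acct01 CHAR(7) = NULL,
 @acct02 CHAR(7) = NULL,
 @acct03 CHAR(7) = NULL)
-- a text editor will happily generate hundreds more of these lines
AS
SELECT ...
  FROM ...
 WHERE char_account_no IN (@acct01, @acct02, @acct03);
-- unused NULL slots never match anything, so they are harmless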
Here is another version with a recursive CTE (written in roughly
Standard SQL; adjust POSITION and CHAR_LENGTH to your dialect):
CREATE PROCEDURE GetEmployeeList(@emp_list VARCHAR(200))
AS
WITH RECURSIVE EmpCte (emp_id, emp_left, depth)
AS (
SELECT @emp_list, @emp_list, 0
UNION ALL
-- peel the head element off the front of the remaining list
SELECT LEFT(emp_left, POSITION(',' IN emp_left) - 1),
       RIGHT(emp_left, CHAR_LENGTH(emp_left) - POSITION(',' IN emp_left)),
       depth + 1
FROM EmpCte
WHERE emp_left LIKE '%,%'
UNION ALL
-- the last element has no comma left
SELECT emp_left, NULL, depth + 1
FROM EmpCte
WHERE emp_left NOT LIKE '%,%')
SELECT E.emp_name
FROM Personnel AS E, EmpCte AS C
WHERE E.emp_id = C.emp_id
AND C.depth > 0; -- skip the anchor row, which holds the whole list
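Called like this, assuming the hypothetical Personnel table of the
example:
EXEC GetEmployeeList '101,102,103';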