I am expecting the MERGE to scan only the source table.
Any idea?
Even if I create another table like the target table, using the same
partition key, the MERGE still does two table scans.
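Roughly, the statement has the shape below. This is only a sketch: the key
columns DRN and SOURCE_ID are the real ones, but MEASURE1 stands in for the
many real columns and the WHEN conditions are simplified.

MERGE INTO FACT T
USING (SELECT DRN, SOURCE_ID, MEASURE1 FROM STAGING) S
  ON (T.DRN = S.DRN AND T.SOURCE_ID = S.SOURCE_ID)
WHEN MATCHED AND S.MEASURE1 IS NULL THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET T.MEASURE1 = S.MEASURE1
WHEN NOT MATCHED THEN
  INSERT (DRN, SOURCE_ID, MEASURE1)
  VALUES (S.DRN, S.SOURCE_ID, S.MEASURE1);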
Hello Anwei,
Could you provide some DDL that would allow us to recreate your
situation? There are many factors involved in the optimizer's decision
to use a table scan. The information you're giving is too vague to
comment on.
--
Frederik Engelen
--
Serge Rielau
SQL Architect DB2 for LUW, IBM Toronto Lab
Blog: tinyurl.com/SQLTips4DB2
Wiki: tinyurl.com/Oracle2DB2Wiki
Twitter: srielau
"The information you're giving is too vague to comment on."
Ok, yes, well, euhm, Serge can, of course...
--
Frederik
Any chance of setting the optimization level at the statement level (sort of
like what can be done with the isolation level)?
db2 set current query optimization 3
<run your statement here>
db2 set current query optimization <old value>
--
Helmut K. C. Tessarek
DB2 Performance and Development
IBM Toronto Lab
You can't set the optimization level at statement level (within the SQL
statement), but you can use the following special register:
set current query optimization = 3
<run your statement here>
set current query optimization = <old value>
Yes, I already know that. My question is whether there is any chance of IBM
making an enhancement to DB2 to have it set at the statement level.
Changing the current optimization is unacceptable in most cases.
http://publib.boulder.ibm.com/infocenter/db2luw/v9/topic/com.ibm.db2.udb.admin.doc/doc/c0024649.htm
Cheers
Serge
PS: No, this is not the complete answer, but knowing you, you should be
able to connect the dots...
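A rough sketch of where those dots lead (profile name, schema, and statement
below are only placeholders; the linked topic has the full XML schema and the
registration steps): a QRYOPT guideline inside a statement profile of an
optimization profile requests an optimization class for just that statement,
without touching the special register.

<?xml version="1.0" encoding="UTF-8"?>
<OPTPROFILE VERSION="9.1.0.0">
  <!-- one STMTPROFILE per statement that should get a non-default class -->
  <STMTPROFILE ID="count fact at opt class 3">
    <STMTKEY SCHEMA="MYSCHEMA">
      <![CDATA[SELECT COUNT(*) FROM FACT]]>
    </STMTKEY>
    <OPTGUIDELINES>
      <QRYOPT VALUE="3"/>
    </OPTGUIDELINES>
  </STMTPROFILE>
</OPTPROFILE>

-- after storing the XML in SYSTOOLS.OPT_PROFILE as MYSCHEMA.MYPROFILE:
SET CURRENT OPTIMIZATION PROFILE = 'MYSCHEMA.MYPROFILE';

The text inside STMTKEY has to match the statement the application actually
issues, and the profile XML must be stored in SYSTOOLS.OPT_PROFILE before the
SET statement will find it.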
)
DISTRIBUTE BY HASH(DRN, SOURCE_ID)
IN TSM003 INDEX IN IXM003;

ALTER TABLE FACT VOLATILE CARDINALITY;

-- DDL Statements for primary key on Table FACT
ALTER TABLE FACT
  ADD PRIMARY KEY (DRN, SOURCE_ID);

CREATE INDEX IDX01_DOC_HDR
  ON DOC_HDR ( ID );
From the plan, the MERGE UPDATE is scanning the target table, but the
MERGE INSERT is using the primary key.
DB2 v9.1.0.7
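For reference, output in this shape comes from db2expln; an invocation along
these lines (the database name and statement file are placeholders) produces
the section listing plus the graph at the end:

db2expln -d MYDB -f merge.sql -z ';' -t -g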
Section Code Page = 1208
Estimated Cost = 64481.765625
Estimated Cardinality = 5745.120117
Coordinator Subsection - Main Processing:
Distribute Subsection #5
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8
Distribute Subsection #4
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8
Distribute Subsection #3
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8
Distribute Subsection #2
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8
Distribute Subsection #1
| Broadcast to Node List
| | Nodes = 1, 2, 3, 4, 5, 6, 7, 8
Subsection #1:
Data Stream 1:
| Not Piped
| Access Table Queue ID = q1 #Columns = 331
| Insert Into Temp Table ID = t1
| | #Columns = 331
End of Data Stream 1
Access Temp Table ID = t1
| #Columns = 331
| Relation Scan
| | Prefetch: Eligible
Insert: Table Name = FACT ID = 14,5
| Insert Predicate(s)
| | #Predicates = 2
Subsection #2:
Data Stream 2:
| Not Piped
| Access Table Queue ID = q2 #Columns = 333
| Establish Row Position
| | Access Table Name = FACT ID = 14,5
| Update: Table Name = FACT ID = 14,5
| | Update Predicate(s)
| | | #Predicates = 2
| Delete: Table Name = FACT ID = 14,5
| | Delete Predicate(s)
| | | #Predicates = 2
| Insert Into Temp Table ID = t2
| | #Columns = 331
End of Data Stream 2
Access Temp Table ID = t2
| #Columns = 331
| Relation Scan
| | Prefetch: Eligible
| Sargable Predicate(s)
| | Insert Into Asynchronous Table Queue ID = q1
| | | Hash to Specific Node
| | | Rows Can Overflow to Temporary Tables
Insert Into Asynchronous Table Queue Completion ID = q1
Subsection #3:
Access Table Queue ID = q3 #Columns = 327
| Output Sorted
| | #Key Columns = 1
| | | Key 1: (Ascending)
Residual Predicate(s)
| #Predicates = 4
Residual Predicate(s)
| #Predicates = 1
Insert Into Asynchronous Table Queue ID = q2
| Send to Specific Node
| Rows Can Overflow to Temporary Tables
Subsection #4:
Access Table Name = FACT ID = 14,5
| #Columns = 328
| Compressed Table
| Relation Scan
| | Prefetch: Eligible
Isolation Level: Read Stability
| Lock Intents
| | Table: Intent Exclusive
| | Row : Exclusive
| Sargable Predicate(s)
| | Process Build Table for Hash Join
Left Outer Hash Join
| Early Out: Single Match Per Outer Row
| Estimated Build Size: 224000
| Estimated Probe Size: 4000000
| Access Table Queue ID = q4 #Columns = 322
Insert Into Sorted Temp Table ID = t3
| #Columns = 327
| #Sort Key Columns = 1
| | Key 1: (Ascending)
| Sortheap Allocation Parameters:
| | #Rows = 1995.000000
| | Row Width = 2448
| Piped
Access Temp Table ID = t3
| #Columns = 327
| Relation Scan
| | Prefetch: Eligible
| Sargable Predicate(s)
| | Insert Into Asynchronous Table Queue ID = q3
| | | Hash to Specific Node
| | | Rows Can Overflow to Temporary Tables
Insert Into Asynchronous Table Queue Completion ID = q3
Subsection #5:
Access Table Name = STAGING. ID = 14,6
| #Columns = 323
| Volatile Cardinality
| Relation Scan
| | Prefetch: Eligible
| Lock Intents
| | Table: Intent Share
| | Row : Next Key Share
| Sargable Predicate(s)
| | #Predicates = 1
Insert Into Asynchronous Table Queue ID = q4
| Hash to Specific Node
| Rows Can Overflow to Temporary Tables
End of section
Optimizer Plan:

                    INSERT
                    (   2)
                   /      \
              TBSCAN       Table:
              (   3)       FACT
                 |
               TEMP
               (   4)
                 |
                DTQ
               (   5)
                 |
              TBSCAN
              (   6)
                 |
               TEMP
               (   7)
                 |
              DELETE
              (   8)
             /      \
        UPDATE       Table:
        (   9)       FACT
       /      \
   FETCH       Table:
   (  10)      FACT
    /   \
  DTQ    Table:
  (  11) FACT
    |
  FILTER
  (  12)
    |
  FILTER
  (  13)
    |
   MDTQ
  (  14)
    |
  TBSCAN
  (  15)
    |
   SORT
  (  16)
    |
  HSJOIN
  (  17)
   /    \
 DTQ     TBSCAN
 (  18)  (  20)
   |       |
 TBSCAN  Table:
 (  19)  FACT
   |
 Table:
 STAGING