
Performance hit - parameters versus variables


davejg

Nov 19, 2001, 7:31:42 PM
Why would it be that a SELECT in an SP would run twice as slow if I
used parameter variables instead of "regular" variables?

I was having an issue where a query ran faster as a batch than in a
stored procedure. It turned out to be because the stored procedure
used parameters in the query and the batch didn't.

Or, if you want examples,

(Note: TransactionSummary is a huge table with indexes on MerchantKey
and ProcessDate)

Why is this:

Alter Procedure rptFTXRYTDPortProcSum ( @ParmToday DATETIME = NULL,
    @ParmBegDate DATETIME = NULL )
AS

SET NOCOUNT ON

DECLARE @Today AS DATETIME, @BegDate AS DATETIME

-- Note: test the *parameters* for NULL here; the freshly declared
-- local variables are always NULL at this point.
IF @ParmToday IS NULL
BEGIN
    SELECT @Today = SysInfo.ACHEndDate
    FROM SysInfo
    WHERE SysInfo.SysInfoKey = 1
END
ELSE
BEGIN
    SET @Today = @ParmToday
END

IF @ParmBegDate IS NULL
BEGIN
    SELECT @BegDate = DateAdd(DAY, (DatePart(DayOfYear, @Today) * -1) + 1, @Today)
END
ELSE
BEGIN
    SET @BegDate = @ParmBegDate
END

SELECT  BOM = DateAdd(DAY, (DatePart(Day, ProcessDate) * -1) + 1, ProcessDate),
        TransactionSummary.MerchantKey,
        SUM(TransactionSummary.VisaSalesAmt) AS SumVisaSalesAmt,
        SUM(TransactionSummary.VisaSalesCnt) AS SumVisaSalesCnt,
        SUM(TransactionSummary.McSalesAmt) AS SumMcSalesAmt,
        SUM(TransactionSummary.McSalesCnt) AS SumMcSalesCnt,
        SUM(TransactionSummary.McNewRetrievalsCnt) AS SumMcNewRetrievalsCnt
INTO    #TmpMerchTots
FROM    TransactionSummary
WHERE   TransactionSummary.ProcessDate >= @BegDate
  AND   TransactionSummary.ProcessDate <= @Today
GROUP BY DateAdd(DAY, (DatePart(Day, ProcessDate) * -1) + 1, ProcessDate),
        MerchantKey
ORDER BY 1, 2


Much faster than:

Alter Procedure rptFTXRYTDPortProcSum ( @Today DATETIME = NULL,
@BegDate DATETIME = NULL )

AS

SET NOCOUNT ON

IF @Today IS NULL
BEGIN
    SELECT @Today = SysInfo.ACHEndDate
    FROM SysInfo
    WHERE SysInfo.SysInfoKey = 1
END

IF @BegDate IS NULL
BEGIN
    SELECT @BegDate = DateAdd(DAY, (DatePart(DayOfYear, @Today) * -1) + 1, @Today)
END

SELECT  BOM = DateAdd(DAY, (DatePart(Day, ProcessDate) * -1) + 1, ProcessDate),
        TransactionSummary.MerchantKey,
        SUM(TransactionSummary.VisaSalesAmt) AS SumVisaSalesAmt,
        SUM(TransactionSummary.VisaSalesCnt) AS SumVisaSalesCnt,
        SUM(TransactionSummary.McSalesAmt) AS SumMcSalesAmt,
        SUM(TransactionSummary.McSalesCnt) AS SumMcSalesCnt,
        SUM(TransactionSummary.McNewRetrievalsCnt) AS SumMcNewRetrievalsCnt
INTO    #TmpMerchTots
FROM    TransactionSummary
WHERE   TransactionSummary.ProcessDate >= @BegDate
  AND   TransactionSummary.ProcessDate <= @Today
GROUP BY DateAdd(DAY, (DatePart(Day, ProcessDate) * -1) + 1, ProcessDate),
        MerchantKey
ORDER BY 1, 2

Andrew J. Kelly

Nov 20, 2001, 8:00:08 AM
A couple of reasons come to mind. First, without a parameter (or
variable) the sp is more likely to reuse a cached plan than if it had
them; the recompiling can be costly at times. The other reason is that
when you use a variable, SQL Server has to guess what the correct plan
would be for any given value. For instance, say you ran an sp for the
first time with a value that was best served by a table scan, and that
plan was cached. When you re-run the sp with a value that would
normally use an index, it will still use the old query plan and scan
instead of using the index. This is a simplification, but it should
show that the more variables you have that determine the optimal query
plan, the harder it is to reuse a plan, and the greater the chance of
a wrong plan for any given value.
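Andrew's scan-vs-seek scenario can be sketched as follows (the table
and procedure names here are hypothetical, purely for illustration):

```sql
-- Hypothetical proc; suppose Orders.Status is indexed, and that
-- Status = 0 matches 40% of the rows while other values are rare.
CREATE PROC GetOrdersByStatus @Status INT AS
    SELECT * FROM Orders WHERE Status = @Status
GO

-- First execution: a table scan is the right plan for 0, and that
-- plan is cached.
EXEC GetOrdersByStatus @Status = 0

-- Later execution: 7 would be best served by an index seek, but the
-- cached scan plan is reused.
EXEC GetOrdersByStatus @Status = 7

-- Trading recompile cost for a fresh plan on each call:
EXEC GetOrdersByStatus @Status = 7 WITH RECOMPILE
```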

--
Andrew J. Kelly, SQL Server MVP
TargitInteractive


"davejg" <davi...@yahoo.com> wrote in message
news:77c6045d.01111...@posting.google.com...

Bart Duncan [MS]

Nov 20, 2001, 11:56:13 AM
David -

The reason for the performance difference stems from a feature called
"parameter sniffing". Consider a stored proc defined as follows:

CREATE PROC proc1 @p1 int AS
SELECT * FROM table1 WHERE c1 = @p1
GO

Keep in mind that the server has to compile a complete execution plan for
the proc before the proc begins to execute. In 6.5, at compile time SQL
didn't know what the value of @p1 was, so it had to make a lot of guesses
when compiling a plan. Suppose all of the actual parameter values for
"@p1 int" that a user ever passed into this stored proc were unique
integers that were greater than 0, but suppose 40% of the [c1] values in
[table1] were, in fact, 0. SQL would use the average density of the
column to estimate the number of rows that this predicate would return;
this would be an overestimate, and SQL might choose a table scan
over an index seek based on the rowcount estimates. A table scan would
be the best plan if the parameter value was 0, but unfortunately it
happens that users will never or rarely pass @p1=0, so performance of the
stored proc for more typical parameters suffers.

In SQL 7.0 or 2000, suppose you executed this proc for the first time
(when the sp plan is not in cache) with the command "EXEC proc1 @p1 =
10". Parameter sniffing allows SQL to insert the known value of
parameter @p1 into the query at compile time before a plan for the query
is generated. Because SQL knows that the value of @p1 is not 0, it can
compile a plan that is tailored to the class of parameters that is
actually passed into the proc, so for example it might select an index
seek instead of a table scan based on the smaller estimated rowcount --
this is a good thing if most of the time 0 is not the value passed as
@p1. Generally speaking, this feature allows more efficient stored proc
execution plans, but a key requirement for everything to work as expected
is that the parameter values used for compilation be "typical".

In your case, the problem is that you have default NULL values for your
parameters ("@Today DATETIME = NULL, ...") that are not typical because
the parameter values are changed inside the stored proc before they are
used -- as a result NULL will never actually be used to search the
column. If the first execution of this stored proc doesn't pass in an
explicit value for the @Today parameter, SQL believes that its value will
be NULL. When SQL compiles the plan for this sp it substitutes NULL for
each occurrence of @Today that is embedded within a query.
After execution begins, the first thing the stored proc does is change
@Today to a non-NULL value if it is found to be NULL, but SQL doesn't
know about this at compile time. Because NULL is a very atypical
parameter value, the plan that SQL generates may not be a good one for
the new value of the parameter that is assigned at execution time.
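Reduced to Bart's proc1/table1 shapes, the trap looks like this (a
minimal sketch, not the actual rptFTXRYTDPortProcSum code):

```sql
CREATE PROC proc2 @p1 INT = NULL AS
    -- Runtime fix-up: invisible to the compiler, which has already
    -- built the plan for the query below using the sniffed value.
    IF @p1 IS NULL
        SET @p1 = 10
    SELECT * FROM table1 WHERE c1 = @p1
GO

-- If the first call is "EXEC proc2" (no argument), the plan for the
-- SELECT is costed as if c1 = NULL -- an atypical value that is
-- never actually searched for.
```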

So, the bottom line is that if you assign defaults to your sp parameters
and later use those same parameters in a query, the defaults should be
"typical" because they will be used during plan generation. If you must
use defaults and business logic dictates that they be atypical (as may be
the case here if app modifications are not an option), there are two
possible solutions if you determine that the substitution of atypical
parameter values is causing bad plans:

1. "Disable" parameter sniffing by using local DECLARE'd variables that
you SET equal to the parameters inside the stored proc, and use the local
variables instead of the offending parameters in the queries. This is the
solution that you found yourself. SQL can't use parameter sniffing in
this case so it must make some guesses, but in this case the guess based
on average column density is better than the plan based on a specific but
"wrong" parameter value (NULL).

2. Nest the affected queries somehow so that they run within a different
context that will require a distinct execution plan. There are several
possibilities here, for example:
a. Put the affected queries in a different "child" stored proc. If
you execute that stored proc within this one *after* the parameter @Today
has been changed to its final value, parameter sniffing will suddenly
become your friend because the value SQL uses to compile the queries
inside the child stored proc is the actual value that will be used in the
query.
b. Use sp_executesql to execute the affected queries. The plan won't
be generated until the sp_executesql stmt actually runs, which is of
course after the parameter values have been changed.
c. Use dynamic SQL ("EXEC (@sql)") to execute the affected queries.
An equivalent approach would be to put the query in a child stored proc
just like 2.a, but execute it within the parent proc with EXEC WITH
RECOMPILE.
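Sketched against the same proc1/table1 example (the procedure names
here are made up for illustration), the workarounds look like this:

```sql
-- 1. Local variables: sniffing is disabled; the optimizer falls
--    back to average column density.
CREATE PROC GetRows_Local @p1 INT = NULL AS
    DECLARE @v1 INT
    SET @v1 = ISNULL(@p1, 10)
    SELECT * FROM table1 WHERE c1 = @v1
GO

-- 2a. Child proc: compiled when first executed, i.e. after the
--     parent has replaced the NULL, so the sniffed value is the
--     real one.
CREATE PROC GetRows_Child @p1 INT AS
    SELECT * FROM table1 WHERE c1 = @p1
GO
CREATE PROC GetRows_Parent @p1 INT = NULL AS
    IF @p1 IS NULL SET @p1 = 10
    EXEC GetRows_Child @p1
GO

-- 2b. sp_executesql: the plan is built when the statement runs,
--     after the parameter has its final value.
CREATE PROC GetRows_ExecSql @p1 INT = NULL AS
    IF @p1 IS NULL SET @p1 = 10
    EXEC sp_executesql
        N'SELECT * FROM table1 WHERE c1 = @x',
        N'@x INT', @x = @p1
GO

-- 2c. Dynamic SQL (or EXEC GetRows_Child ... WITH RECOMPILE):
--     a new plan per execution, tailored to this value.
CREATE PROC GetRows_Dynamic @p1 INT = NULL AS
    IF @p1 IS NULL SET @p1 = 10
    DECLARE @sql NVARCHAR(200)
    SET @sql = N'SELECT * FROM table1 WHERE c1 = '
             + CONVERT(NVARCHAR(12), @p1)
    EXEC (@sql)
GO
```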

Option #1 seems to have worked well for you in this case, although
sometimes one of the options in #2 is a preferable choice. Here are some
guidelines, although when you're dealing with something as complicated as
the query optimizer experimentation is often the best approach <g>:

- If you have only one "class" (defined as values that have similar
density in the table) of actual parameter value that is used within a
query (even if there are other classes of data in the base table that are
never or rarely searched on), 2.a. or 2.b is probably the best option.
This is because these options permit the actual parameter values to be
used during compilation which should result in the most efficient query
plan for that class of parameter.
- If you have multiple "classes" of parameter value (for example, for
the column being searched, half the table data is NULL, the other half
are unique integers, and you may do searches on either class), 2.c can be
effective. The downside is that a new plan for the query must be
compiled on each execution, but the upside is that the plan will always
be tailored to the parameter value being used for that particular
execution. This is best when there is no single execution plan that
provides acceptable execution time for all classes of parameters.

HTH -
Bart
------------
Bart Duncan
Microsoft SQL Server Support

Please reply to the newsgroup only - thanks.

This posting is provided "AS IS" with no warranties, and confers no
rights.

--------------------
From: davi...@yahoo.com (davejg)
Newsgroups: microsoft.public.sqlserver.programming
Subject: Performance hit - parameters versus variables
Date: 19 Nov 2001 16:31:42 -0800

David Gullett

Nov 20, 2001, 12:39:07 PM
Thank you so much, Bart, that was exactly what I was trying to find out!


oj

Nov 20, 2001, 2:03:21 PM
great article!

-oj

"Bart Duncan [MS]" <bartdunc...@microsoft.com> wrote in message
news:uBWmjRe...@cppssbbsa01.microsoft.com...
