I've recently become aware of nesting stored procedures in SQL Server. In my
public-facing application, every page makes 2-4 database calls to retrieve
different pieces of information. I know that database calls are expensive.
With that said, does it make sense to create a single stored procedure that
executes multiple stored procedures, so that I can make a single database
call to just that one stored procedure? Would this make sense? Would it help
improve performance?
NOTE: If I don't do this, I'd be calling the stored procedures that would be
nested in that single SP one by one, hence making multiple database calls.
--
Thanks,
Sam
//every page makes 2-4 database calls to retrieve
different pieces of the information. I know that database calls are
expensive.//
That is less than desirable.
Your optimal goal is "one page, one db hit".
So there is nothing wrong with nesting. I would lean toward Select
statements; if you do items other than Select, just be careful.
Here is a typical but simple example that is acceptable, and better than
calling each (sub) usp separately.
But be careful: what will happen is that you'll start putting a
bunch of "flag" parameters in the sub usps, and then it gets confusing.
IF EXISTS (SELECT * FROM sys.objects WHERE object_id =
OBJECT_ID(N'[dbo].[uspOrganizationGetAll]') AND type in (N'P', N'PC'))
    DROP PROCEDURE [dbo].[uspOrganizationGetAll]
GO

CREATE PROCEDURE [dbo].[uspOrganizationGetAll]
AS
BEGIN
    SET NOCOUNT ON

    EXEC [dbo].[uspDepartmentGetAll]
    EXEC [dbo].[uspEmployeeGetAll]
    EXEC [dbo].[uspJobTitlesGetAll]

    SET NOCOUNT OFF
END
GO

GRANT EXECUTE ON dbo.uspOrganizationGetAll TO $(DBUSERNAME)
GO
"Sam" <S...@discussions.microsoft.com> wrote in message
news:742C3C1C-4E9D-40C9...@microsoft.com...
Thanks for your response. I agree that it's not desirable, and I'm glad to
hear that it's OK to nest SPs into one.
I like the idea of calling individual SPs from one because it lets me
maintain each stored procedure individually. So if I make some modifications
to one stored procedure, it only affects that single SP.
I assume that, by nesting SPs into one, I'd be returning multiple recordsets
from one stored procedure.
--
Thanks,
Sam
Reducing client or middle-tier round trips to the database is a good idea in
that it will reduce network traffic. Your mileage in improving performance
will depend on how close the middle tier is to the database. In SQL Server
2000 (and earlier versions), because recompiles of query plans are done on a
per-procedure basis, it's sometimes helpful to execute one SQL statement per
procedure as well. In SQL Server 2005-2008, query plans are recompiled on a
per-statement basis (not per-procedure), so statement-per-procedure does not
help in reducing recompiles.
If you're going to combine many calls into a single call, you'll need to add
robust error handling to the "driver" procedure so as not to call procedures
2, 3, and 4 if procedure 1 fails, etc. You'd need to replicate your existing
client/middle-tier code's concept of success/failure results, quit the batch
of procedures on a failure, and return the results/return code that the
client/middle-tier sees in the original. This will likely mean changes to
the data access code as well.
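For example, in SQL Server 2005 and later, TRY/CATCH in the driver procedure
gives you that stop-on-first-failure behavior. A minimal sketch (the
procedure name and the convention of re-raising the error and returning a
nonzero code are illustrative, not anything agreed on in this thread):

```sql
-- Hypothetical driver: stops at the first failure and surfaces it to the caller.
CREATE PROCEDURE [dbo].[uspOrganizationGetAllChecked]
AS
BEGIN
    SET NOCOUNT ON

    BEGIN TRY
        EXEC [dbo].[uspDepartmentGetAll]   -- if this raises an error,
        EXEC [dbo].[uspEmployeeGetAll]     -- the remaining procedures
        EXEC [dbo].[uspJobTitlesGetAll]    -- are skipped
    END TRY
    BEGIN CATCH
        -- Re-raise so the client/middle tier still sees the original error text.
        DECLARE @msg nvarchar(2048)
        SET @msg = ERROR_MESSAGE()
        RAISERROR(@msg, 16, 1)
        RETURN 1   -- nonzero return code signals failure
    END CATCH

    RETURN 0
END
GO
```

The data access code can then check the return code (or catch the raised
error) exactly as it would for a single-procedure call.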
Hope this helps,
Bob Beauchemin
SQLskills
"Sam" <S...@discussions.microsoft.com> wrote in message
news:742C3C1C-4E9D-40C9...@microsoft.com...
Thanks for your response.
In my case, calling the individual stored procedures does not depend on
certain conditions being met. I have to call all of them, so maybe "nested"
is not the right way to phrase it. Maybe I should say calling other
procedures from one.
An example may explain this better. The application is an ASP.NET
application that uses lots of user controls, which are great for
encapsulating functionality. But each user control calls the database, so
when you look at the bigger picture, each request for a page that contains
multiple user controls generates multiple database calls.
A good example is project info. In the project info page, I may have 3 user
controls:
User Control 1 provides project team info by calling stored procedure 1.
User Control 2 provides project history info by calling stored procedure 2.
and so on.
I'm trying to reduce the DB calls by "consolidating" stored procedure calls
into 1.
You are right, though. This will require me to rethink the way I call the
database in the DAL, because currently each user control acts independently.
I'll have to figure out a way to centralize DB calls and distribute the data
to the individual user controls.
--
Thanks,
Sam
A strongly typed dataset with multiple tables:

OrganizationDS
    Employee (Table)
    JobTitle (Table)
    Department (Table)

and your user control can be "passed" one or N of them:
UserControl1 ->> gets OrganizationDS.Employee (Table).
Or
(better yet)
Create strong objects and collections:

Employee
EmployeeCollection : List<Employee>
JobTitle
JobTitleCollection : List<JobTitle>
Department
DepartmentCollection : List<Department>

Create a "wrapper" object:

OrganizationWrapper
    (set/get properties...)
    EmployeeCollection TheEmployees       // setter and getter for this property
    JobTitleCollection TheJobTitles       // setter and getter for this property
    DepartmentCollection TheDepartments   // setter and getter for this property
Have your one db "hit" populate the OrganizationWrapper object.
Then each control can take 1 (or N) of the sub collections
UserControl1 ->> pass it a DepartmentCollection via the
Organization.DepartmentCollection property
UserControl2 ->> pass it a JobTitleCollection via the
Organization.JobTitleCollection property
UserControl3 ->> pass it an EmployeeCollection via the
Organization.EmployeeCollection property
That's one db hit, but getting what you need into a (just) big enough object
to do what you need.
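Putting the two pieces together: because the driver procedure returns
multiple result sets, the DAL can walk them with SqlDataReader.NextResult
and fill the wrapper in one db hit. A rough sketch, assuming the result sets
come back in Department/Employee/JobTitle order, and inventing a single
illustrative column per entity (the real classes would map all of their
columns):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class OrganizationDal
{
    // One db hit: execute the driver usp and read its three result sets.
    public static OrganizationWrapper LoadOrganization(string connectionString)
    {
        var wrapper = new OrganizationWrapper
        {
            TheDepartments = new DepartmentCollection(),
            TheEmployees   = new EmployeeCollection(),
            TheJobTitles   = new JobTitleCollection()
        };

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.uspOrganizationGetAll", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                // First result set: departments (column 0 assumed to be the name)
                while (reader.Read())
                    wrapper.TheDepartments.Add(new Department { Name = reader.GetString(0) });

                reader.NextResult();   // second result set: employees
                while (reader.Read())
                    wrapper.TheEmployees.Add(new Employee { LastName = reader.GetString(0) });

                reader.NextResult();   // third result set: job titles
                while (reader.Read())
                    wrapper.TheJobTitles.Add(new JobTitle { Title = reader.GetString(0) });
            }
        }
        return wrapper;
    }
}
```

Each user control then takes its sub-collection from the one wrapper instead
of opening its own connection.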
I used to do (strong) datasets with multiple tables, but not anymore. I
prefer the strong objects, strong collections, and wrapper object.
"Sam" <S...@discussions.microsoft.com> wrote in message
news:A1078B39-DCFA-4601...@microsoft.com...
That's exactly what I need to do. Your response and help are much
appreciated. Thank you very much!
--
Thanks,
Sam
Instead of a wrapper, I have Customer(s), who have (subcollection) Order(s),
which have (subcollection) OrderDetails.
But it's very close to what you need.
Note: the sample has
List<Customer> everywhere.
Now I prefer
public class CustomerCollection : List<Customer>
{
    // Yep, that's it! But you're future-proofed if this collection ever
    // needs a unique method.
}
"Sam" <S...@discussions.microsoft.com> wrote in message
news:61AD23D1-D24E-45F8...@microsoft.com...
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
<sql...@live.com> wrote in message
news:84e99d2f-ee5b-4512...@x6g2000vbg.googlegroups.com...
> The advice given here resembles the advice Celko gives, absolutely
> useless and wrong. Ignore these idiots.
>
> It's okay for an application to make 2-4 database calls. The design
> you mentioned is very common, and each user control would make its own
> database call. Not a problem. If you're having performance problems,
> look into optimizing your queries, because they're probably poorly
> written.
>
> Don't turn your application into a maintenance nightmare by inventing
> a "god stored procedure" that does everything like the idiots above
> say.
But going to an ad hominem argument in your first response to this
thread... wow, seriously?
http://en.wikipedia.org/wiki/Ad_hominem
//previous quote// The design you mentioned is very common //
http://en.wikipedia.org/wiki/Argumentum_ad_populum
argumentum ad populum
This fallacy is sometimes committed while trying to convince a person that a
widely popular theory is true.
Yeah, that works as well. They call this peer pressure in middle school.
"Everybody else is doing it, you should do it too".
It also inspires things like mob lynchings..........."We know he's guilty,
let justice be served...no trial!.......just bring him out here and hang
him......it is what this mob wants!".
No one suggested a god stored procedure. My example linked 3 closely
associated entities with 1 wrapper.
Here is a scenario: if you want to edit a single employee, you'll need the
metadata for the employee (lastname, firstname, ssn, etc.), a list of
departments, and a list of jobtitles.
Do you want to hit the db 3 times to get that information?
Scaling an application is a discipline. This kind of "hit the db as many
times as you'd like" mentality is one of the approaches that cripples good
scaling techniques.
If you want to point out some counterarguments, then fine, list them, and
let the OP make up his own mind.
But right now, all I see is a "you're an idiot" comment and an argumentum ad
populum argument.
Here is a list of logical fallacies; you might want to review it before
your next post.
http://www.don-lindsay-archive.org/skeptic/arguments.html
I will respond to arguments about the code setup, but this will be my last
post if it becomes a mud-slinging contest.
Again, I will respond to technical arguments/suggestions that address the
OP's original question.
................
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
<sql...@live.com> wrote in message
news:8cc40f59-400c-416a...@t10g2000vbg.googlegroups.com...
> BTW sloan, your example is trash and that is very bad design. You are
> using a god-like class (wrapper). I recommend a beginner-level book on
> software engineering and design patterns.
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
<sql...@live.com> wrote in message
news:be1b91d0-a70a-434e...@o30g2000vbc.googlegroups.com...
In your scenario, because there are only a handful of job titles
and departments, you would put the job titles and departments into a
static cache. Therefore, you're only hitting the database for the
employee information. It works better and is more scalable than your
"wrapper" nested god-like stored procedure garbage.
On May 19, 10:24 am, "sloan" <sl...@ipass.net> wrote:
> If you want to bring up some counter arguments, then fine.
> By all means, do so....and help give the OP some more options for his
> decision.
>
> But going to an ad hominem argument in your first response to this
> thread... wow, seriously?
> http://en.wikipedia.org/wiki/Ad_hominem
>
> //previous quote// The design you mentioned is very common //
> http://en.wikipedia.org/wiki/Argumentum_ad_populum
> your next post. http://www.don-lindsay-archive.org/skeptic/arguments.html
>
> I will respond to arguments about the code setup. This will be my last
> post
> if it becomes a mud slinging contest.
> Again, I will respond to technical arguments/suggestions made that talk
> about the OP's original question.
>
> ................
>
> <sqlg...@live.com> wrote in message
Microsoft created an entire feature (SqlCacheDependency) because caching is
NOT that simple (the quoted methodology notwithstanding).
http://msdn.microsoft.com/en-us/library/ms178604(VS.80).aspx
Caching in ASP.NET with the SqlCacheDependency Class
That is, the cache becomes outdated at times. How do you handle this
situation?
//Updates go straight to the data object and are persisted by the data layer.//
Can you explain how this happens in a web farm setup? When webserver1
performs the updates, how do webserver2 through webserverN know about it?
Can you explain what happens when the rows inside the RDBMS are updated from
a different source (client) than your data layer object?
......................
>>>>Static caching like this requires mechanisms to handle modifications to
>>>>the
>>>>base data, which can be a non-trivial exercise.
>>>>(From ......... Kevin G. Boles)
The above previous post is a valid observation.
<sql...@live.com> wrote in message
news:2fb5ea9a-7d4b-41dd...@q2g2000vbr.googlegroups.com...
What extra mechanisms would you require? Do you even know how to
implement caching?
It doesn't matter if he is using .NET data objects (dataset, datatable,
etc.) or custom entities; they should go through a data layer. The data
layer is where caching should be implemented, so there are no extra
mechanisms required.
In the data layer:
Is Data Object Null? Load From DB : Load From Cache
Updates go straight to the data object and are persisted by the data
layer.
There are no extra mechanisms.
You should seriously pick up a book on software engineering and design
patterns for dummies.
On May 19, 12:34 pm, "TheSQLGuru" <kgbo...@earthlink.net> wrote:
> Static caching like this requires mechanisms to handle modifications to
> the
> base data, which can be a non-trivial exercise.
>
> --
> Kevin G. Boles
> Indicium Resources, Inc.
> SQL Server MVP
> kgboles a earthlink dt net
>
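The "Is Data Object Null? Load From DB : Load From Cache" check above is the
classic cache-aside pattern. A minimal sketch of what that might look like
for the job-title lookup (class and method names are invented for
illustration; note that nothing in it invalidates the cache when the
underlying table changes, which is exactly the staleness/web-farm issue
being debated in this thread):

```csharp
using System.Collections.Generic;

public static class LookupCache
{
    private static readonly object Sync = new object();
    private static List<JobTitle> _jobTitles;   // null until first load

    public static List<JobTitle> GetJobTitles()
    {
        // Cache-aside: null means "not loaded yet", so hit the DB once;
        // afterwards every caller gets the same cached reference.
        if (_jobTitles == null)
        {
            lock (Sync)
            {
                if (_jobTitles == null)
                    _jobTitles = LoadJobTitlesFromDb();   // hypothetical DAL call
            }
        }
        return _jobTitles;
    }

    private static List<JobTitle> LoadJobTitlesFromDb()
    {
        // ... SqlDataReader code elided ...
        return new List<JobTitle>();
    }
}
```

On a single web server this trades one db hit per lookup for one per app
lifetime; across a farm, each server holds its own independent copy.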
On a much better note, I just hunted up the Block Sender option in Outlook
Express. Congratulations on being the first and only person I have ever
used that on (and believe me, that is saying quite a lot about you)!!
Bye-Bye now . . .
--
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
//Can you explain how this happens in a web farm setup? When webserver1
performs the updates, how do webserver2 through webserverN know about it?//
<sql...@live.com> wrote in message
news:bfa5e63e-7448-46a0...@s21g2000vbb.googlegroups.com...
As I said before, the DAL would take care of everything. The DAL would
return an item from the cache, and because this item is a reference, any
changes made will be global. If the item is not in the cache, the DAL
would load it from the DB. So the cache never becomes "outdated", hence
the term static cache. Both JobTitles and Departments function as
"data lookups", and it makes sense to put them into a static cache.
Also, all apps would hopefully go through the same DAL (you do know
that the definition of n-tier is physical tiers). The DAL is a way of
"interfacing" to the DB, so all applications should go through it.
For example, if you wrote stored procedures instead, would a
different application modify the data directly or call the defined stored
procedure?
On May 19, 3:46 pm, "sloan" <sl...@ipass.net> wrote:
> //In the data layer:
> Is Data Object Null? Load From DB : Load From Cache
> Updates go straight to the data object and persisted by the data
> layer.
> There is no extra mechanisms.
> //
>
> Microsoft created an entire feature (SqlCacheDependency) because caching is
> NOT that simple (the above quoted methodology).
> http://msdn.microsoft.com/en-us/library/ms178604(VS.80).aspx
> Caching in ASP.NET with the SqlCacheDependency Class
>
> Aka, the cache becomes outdated at times. How do you handle this
> situation?
>
> //Updates go straight to the data object and persisted by the data
> layer.//
> Can you explain how this happens on a web farm setup? When webserver1
> performs the updates, how does webserver2 through webserverN know about
> it?
>
> Can you explain when the rows inside the RDBMS are updated from a
> different
> source(client) than your data layer object?
>
> ......................
>
> >>>>Static caching like this requires mechanisms to handle modifications
> >>>>to
> >>>>the
> >>>>base data, which can be a non-trivial exercise.
> >>>>(From ......... Kevin G. Boles)
>
> The above previous post is a valid observation.
>
In the first case, there is no scale-out, and it's difficult to see
what you gain by adding another layer.
In the second case, you may gain scale-out, provided that changes are
not very common. Else you will lose more on the replication swings
than you gain on the cache roundabouts.
And either way, you will not pick up changes performed directly
through SQL Server Management Studio.
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se
Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx