"Craig Moreno" <moreno...@ni.cnrsw.navy.mil> wrote in message
news:002101c2ffac$c043c5c0$2f01...@phx.gbl...
http://www.aspfaq.com/show.asp?id=2120
http://support.microsoft.com/default.aspx?scid=kb;EN-US;q186133
http://www.15seconds.com/Issue/010308.htm
--
-oj
RAC v2.1 & QALite!
http://www.rac4sql.net
"Craig Moreno" <moreno...@ni.cnrsw.navy.mil> wrote in message
news:002101c2ffac$c043c5c0$2f01...@phx.gbl...
There are several ways of doing paged processing, and one of them
is using cursors. This is also one of the worst solutions, both
in terms of maintenance and performance.
A better way might be to use the .PageSize and .PageCount properties
in ADO, although I don't know exactly what ADO does behind your
back.
If you have a unique key on which you page, something like this is
the most efficient:
SELECT TOP 50 * FROM tbl WHERE keycol > @lastkey ORDER BY keycol
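To page forward with this approach, the caller remembers the highest key
value of the page just displayed and passes it back for the next page.
A minimal sketch (table and column names are only examples, not from the
post above):

    -- first page
    SELECT TOP 50 * FROM tbl ORDER BY keycol

    -- subsequent pages: @lastkey is the highest keycol value
    -- from the page previously shown to the user
    DECLARE @lastkey int
    SET @lastkey = 50                -- for example

    SELECT TOP 50 * FROM tbl
    WHERE keycol > @lastkey
    ORDER BY keycol                  -- needed so TOP picks deterministic rows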
If you don't have a key, you can select into a temporary table
with an identity column, and pick page ranges from that table. The
table would need to exist for as long as the user is paging through
the result.
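A sketch of that temporary-table variant (the table, sort column, and page
numbers are only examples; also note that on SQL Server 2000 the order in
which IDENTITY values are assigned by SELECT ... INTO ... ORDER BY is not
strictly guaranteed, so verify the behavior on your release):

    SELECT IDENTITY(int, 1, 1) AS rownum, *
    INTO #page
    FROM Customers
    ORDER BY ContactName

    SELECT * FROM #page
    WHERE rownum BETWEEN 26 AND 50   -- page 2 with a page size of 25

    DROP TABLE #page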
--
Erland Sommarskog, SQL Server MVP, som...@algonet.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinfo/productdoc/2000/books.asp
i've always thought that there is a quite obvious way to do it:
1. open a cursor for the pk column of the table ordered by sortcol.
2. position absolutely/relatively to the start of the page (can be a
fairly large number).
3. loop page-length times (a fairly small number) while filling a temp
table with the pk column values.
4. finally, join the temp table so created with the original one.
in T-SQL this looks something like the following:
USE Northwind

DECLARE @PageStart INT,
        @PageLength INT

SELECT @PageStart = 20,
       @PageLength = 25

DECLARE @PK CHAR(5)

-- table variable collecting the keys of the requested page
DECLARE @TmpPK TABLE (
    PK CHAR(5) NOT NULL PRIMARY KEY
)

-- DYNAMIC because ABSOLUTE/RELATIVE fetches are not available
-- with FORWARD_ONLY or FAST_FORWARD cursors (see below)
DECLARE PagingCursor CURSOR DYNAMIC READ_ONLY FOR
    SELECT CustomerID
    FROM Customers
    ORDER BY ContactName

OPEN PagingCursor

-- jump straight to the first row of the page
FETCH RELATIVE @PageStart
FROM PagingCursor
INTO @PK

-- collect @PageLength keys (or stop early at the end of the set)
WHILE @PageLength > 0 AND @@FETCH_STATUS = 0
BEGIN
    INSERT @TmpPK (PK)
    VALUES (@PK)

    FETCH NEXT
    FROM PagingCursor
    INTO @PK

    SET @PageLength = @PageLength - 1
END

CLOSE PagingCursor
DEALLOCATE PagingCursor

-- finally, join the collected keys back to the base table
SELECT tbl.*
FROM Customers tbl
JOIN @TmpPK pk
    ON tbl.CustomerID = pk.PK
ORDER BY ContactName
a couple of notes on the T-SQL:
- the READ_ONLY concurrency option roughly doubles performance.
- it is *imperative* to have a covering index on (sortcol, pk) -- this is
easily achieved by putting a clustered index on the pk column and a
nonclustered one on the sortcol (the nonclustered index then carries the
clustering key).
- if there isn't a covering index, KEYSET cursors perform better.
- this should be made dynamic, so that the table, pk col and sortcol names
can be passed as arguments.
- provided there is a covering index, i can bet that this will fetch 100
records at any position in 3 mil records in less than 10 seconds (tested on
a lowly server!)
- this performs very well on small tables too.
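the "make it dynamic" note can be sketched roughly like this (the proc
name, the sql_variant shortcut for the key type, and the QUOTENAME usage
are mine, not from the post -- in real code you'd parameterize the key
type as well):

    CREATE PROC dbo.GetPage
        @Table      sysname,
        @PKCol      sysname,
        @SortCol    sysname,
        @PageStart  int,
        @PageLength int
    AS
    SET NOCOUNT ON
    DECLARE @sql nvarchar(4000)
    SET @sql = N'
        DECLARE @PK sql_variant
        DECLARE @TmpPK TABLE (PK sql_variant NOT NULL PRIMARY KEY)
        DECLARE PagingCursor CURSOR DYNAMIC READ_ONLY FOR
            SELECT ' + QUOTENAME(@PKCol) + N'
            FROM ' + QUOTENAME(@Table) + N'
            ORDER BY ' + QUOTENAME(@SortCol) + N'
        OPEN PagingCursor
        FETCH RELATIVE @PageStart FROM PagingCursor INTO @PK
        WHILE @PageLength > 0 AND @@FETCH_STATUS = 0
        BEGIN
            INSERT @TmpPK (PK) VALUES (@PK)
            FETCH NEXT FROM PagingCursor INTO @PK
            SET @PageLength = @PageLength - 1
        END
        CLOSE PagingCursor
        DEALLOCATE PagingCursor
        SELECT t.*
        FROM ' + QUOTENAME(@Table) + N' t
        JOIN @TmpPK p ON t.' + QUOTENAME(@PKCol) + N' = p.PK
        ORDER BY t.' + QUOTENAME(@SortCol)
    EXEC sp_executesql @sql,
        N'@PageStart int, @PageLength int',
        @PageStart, @PageLength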
cheers,
</wqw>
"Craig Moreno" <moreno...@ni.cnrsw.navy.mil> wrote in message
news:002101c2ffac$c043c5c0$2f01...@phx.gbl...
Could you elaborate?
> 1. open a cursor for the pk column
This is going to be better? What if there is no PK?
> - lock type READ_ONLY doubles performace.
Yet you use DYNAMIC. Why do you need this? I usually use FAST_FORWARD,
FORWARD_ONLY, etc.
--
AB
www.aspfaq.com
using temp tables to filter records is in fact emulating a keyset cursor:
you end up with 3 mil records in the temp table.
it is very uncommon to be able to specify starting and ending values of the
sort col, and thus to filter with BETWEEN in the WHERE clause. in 99% of
the cases you simply don't know these values beforehand. for instance, you
need to paginate messages in a web forum ordered by date -- it would be
great to know that page N includes messages with dates between X and Y,
but you don't.
2. from BOL:
FORWARD_ONLY -- "FETCH NEXT is the only supported fetch option"
FAST_FORWARD -- "Specifies a FORWARD_ONLY, READ_ONLY cursor with
performance optimizations enabled. FAST_FORWARD cannot be specified if
SCROLL or FOR_UPDATE is also specified."
so neither type can perform ABSOLUTE or RELATIVE fetches.
cheers,
</wqw>
"Aaron Bertrand - MVP" <aa...@TRASHaspfaq.com> wrote in message
news:OIfVcs#$CHA....@TK2MSFTNGP11.phx.gbl...
SQL cursors should be the tool of last resort.
Bob Barrows
"Vlad Vissoultchev" <wqw...@nospam.myrealbox.com> wrote in message
news:ebo#QuCADH...@TK2MSFTNGP10.phx.gbl...
> SQL cursors should be the tool of last resort.
based on what?
i'll probably have to perform some tests in-house and compare the DYNAMIC
SCROLL cursor method against the other methods that are circulating on the
net. maybe next week (off for the weekend).
cheers,
</wqw>
"Bob Barrows" <reb_...@yahoo.com> wrote in message
news:#2#ZhXDAD...@TK2MSFTNGP10.phx.gbl...
Interesting - my application does not choke, and the table has 5 million
records. My users report very quick responses. I use the forward-only
OLEDB cursor/GetRows method. Of course, I take pains to limit the number of
records that need to be processed with each page move.
>
> > SQL cursors should be the tool of last resort.
>
> based on what?
I'm not sure what answer you expect, but:
Based on the advice given by every expert who posts to this group, and every
other SQL Server group I've been involved with.
Based on the advice found in books written by the people who actually
developed the software behind SQL Server, such as Ron Soukup (the "Inside
SQL Server ..." series), who says: "Ordinarily, when approaching a problem,
cursors should be near the bottom of your list of possible solutions, not at
the top."
If that's not good enough, then I don't know what is.
Cursors have their place, but it is very easy to misuse them if you are too
quick to resort to using them.
Bob
> Based on the advice given by every expert who posts to this group, and every
> other SQL Server group I've been involved with ...
Oh, resorting to logic and reason now are we ... well, in that case, there's just no arguing with you :-)
BPM
"Bob Barrows" <reb_...@yahoo.com> wrote in message news:uTC6nQEA...@TK2MSFTNGP10.phx.gbl...
I assume you've tried this? I use the method I posted in a multi-user
application with very large tables. Funny, nobody has complained about any
"choking" going on...