fetching unique pins...

Bobus

Feb 13, 2006, 11:00:07 PM
Hi,

I have a table which contains a bunch of prepaid PINs. What is the
best way to fetch a unique pin from the table in a high-traffic
environment with lots of concurrent requests?

For example, my PINs table might look like this and contain thousands
of records:

ID    PIN           ACQUIRED_BY    DATE_ACQUIRED
....
100   1864678198
101   7862517189
102   6356178381
....

10 users request a pin at the same time. What is the easiest/best way
to ensure that the 10 users will get 10 different unacquired pins?

Thanks for any help...

MGFoster

Feb 14, 2006, 12:49:36 AM

Place a Primary Key or Unique constraint on the PIN column. When a
duplicate error occurs, generate a new PIN and try to save the new user
row again. Repeat until success.
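
For illustration, a rough sketch of that retry loop (assuming a
Users(UserID, PIN) table with a UNIQUE constraint on PIN, and RAND() as a
stand-in PIN generator):

DECLARE @UserID int, @Tries int
SET @UserID = 42                    -- example value; normally supplied by the application
SET @Tries  = 0

WHILE @Tries < 10                   -- give up after a handful of attempts
BEGIN
    -- Users(UserID, PIN) is assumed to have a UNIQUE constraint on PIN
    INSERT INTO Users (UserID, PIN)
    VALUES (@UserID, CAST(RAND() * 9999999999 AS decimal(10)))   -- candidate PIN

    IF @@ERROR = 0 BREAK            -- saved successfully
    SET @Tries = @Tries + 1         -- duplicate key error: generate a new PIN and retry
END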
--
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)

Bobus

Feb 14, 2006, 2:02:03 AM
Thanks, however, we do not generate the PINs ourselves. We simply
maintain the inventory of PINs, which are given to us by a 3rd party.

Is there a way in SQL to update a single row, a la the LIMIT clause in
MySQL? Something like:
update tablename set foo = bar limit 1

MGFoster

Feb 14, 2006, 3:23:20 AM

Unless you're only using the PIN for a one-time operation, somewhere
you are going to save that PIN (in a table). That table is where you'd
put the Primary Key/Unique constraint.

I don't know what the LIMIT clause does. If you want to update just
one row, you'd indicate which row in the WHERE clause:

UPDATE table_name SET foo = bar WHERE foo_id = 25

foo_id would be a unique value.


--
MGFoster:::mgf00 <at> earthlink <decimal-point> net
Oakland, CA (USA)


Randy Martin

Feb 14, 2006, 10:24:34 AM
Maybe this:

select top 1 ID, PIN from pin_table where acquired_by = <not-acquired value>
(NOTE: this could be expensive if you use null to signify "not acquired";
a non-null value with an index might help)

update pin_table set acquired_by = <acquired value> where ID = <ID from the select>

commit

-- or --

Set up one table containing the unused pins and one containing the used
pins, then:

select top 1 @ID = ID, @PIN = PIN from unused_pin
insert into used_pin values (@ID, @PIN)
delete from unused_pin where ID = @ID

commit

Alexander Kuznetsov

Feb 14, 2006, 11:01:38 AM
I successfully used a transactional message queue for a similar
scenario.

Besides, try this:

create table #pins(id int identity, PIN decimal(10));
insert into #pins(PIN) values(1000000000);
insert into #pins(PIN) values(1000000001);
insert into #pins(PIN) values(1000000002);
go
-- a "pointer" table: each newly generated identity value points at one row in #pins
create table #point_to_pins(id int identity)
go
--- to get a PIN
insert into #point_to_pins default values
select @@identity

Use @@identity to look up the PIN; you will never get any collisions.
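
To make the lookup step explicit (adding a variable for the new id):

declare @new_id int

insert into #point_to_pins default values
set @new_id = @@identity            -- the id just generated; it matches exactly one id in #pins

select PIN from #pins where id = @new_id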

Hugo Kornelis

Feb 14, 2006, 4:20:33 PM

Hi Bobus,

To get just one row, you can use TOP 1. Add an ORDER BY if you want to
make it determinate; without ORDER BY, you'll get one row, but there's
no way to predict which one.

If you expect high concurrency, you'll have to use the UPDLOCK hint to
make sure that the row gets locked when you read it, because otherwise a
second transaction might read the same row before the first can update
it to mark it acquired.

If you also don't want to hamper concurrency, add the READPAST locking
hint to allow SQL Server to skip over locked rows instead of waiting
until the lock is lifted. This is great if you need one row but don't
care which row is returned. But if you need to return the "first" row in
the queue, you can't use this (after all, the transaction that has the
lock might fail and rollback; if you had skipped it, you'd be processing
the "second" available instead of the first). In that case, you'll have
to live with waiting for the lock to be released - make sure that the
transaction is as short as possible!!

So to sum it up: to get "one row, just one, don't care which", use:

BEGIN TRANSACTION

SELECT TOP 1 @ID  = ID,
             @Pin = Pin
FROM   PinsTable WITH (UPDLOCK, READPAST)
WHERE  Acquired_By IS NULL
-- Add error handling

UPDATE PinsTable
SET    Acquired_By   = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = @ID
-- Add error handling

COMMIT TRANSACTION

And to get "first row in line", use:

BEGIN TRANSACTION

SELECT TOP 1 @ID  = ID,
             @Pin = Pin
FROM   PinsTable WITH (UPDLOCK)
WHERE  Acquired_By IS NULL
ORDER BY <fill in the blanks>
-- Add error handling

UPDATE PinsTable
SET    Acquired_By   = @User,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = @ID
-- Add error handling

COMMIT TRANSACTION
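
(For a runnable test the variables need declaring first; something like
this, guessing the types from the sample data:)

DECLARE @ID   int,
        @Pin  decimal(10),
        @User varchar(50)
SET @User = 'some_user'     -- example value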


--
Hugo Kornelis, SQL Server MVP

Erland Sommarskog

Feb 14, 2006, 6:19:40 PM
Hugo Kornelis (hu...@perFact.REMOVETHIS.info.INVALID) writes:

Yet a variation is:

SET ROWCOUNT 1
UPDATE PinsTable
SET    @ID  = ID,
       @Pin = Pin,
       Acquired_By = @User      -- mark the row as taken in the same statement
WHERE  Acquired_By IS NULL
SET ROWCOUNT 0

It is essential to have a (clustered) index on Acquired_By.
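
For example, something along these lines (pick your own index name; if ID
is already the clustered primary key, a nonclustered index on Acquired_By
is the alternative):

CREATE CLUSTERED INDEX ix_PinsTable_AcquiredBy ON PinsTable (Acquired_By)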

Which solution gives the best performance is difficult to tell.
My solution looks shorter, but Hugo's may be more effective.

Note also that if there is a requirement that a PIN must actually
be used, the transaction scope may have to be longer, so that in
case of an error there can be a rollback. That will not be good
for concurrency, though.
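
Purely as a sketch, such a longer transaction could look like this,
reusing the variables and statements from Hugo's example (the "use the
PIN" step is only a placeholder):

BEGIN TRANSACTION

SELECT TOP 1 @ID = ID, @Pin = Pin
FROM   PinsTable WITH (UPDLOCK)
WHERE  Acquired_By IS NULL

UPDATE PinsTable
SET    Acquired_By = @User, Date_Acquired = CURRENT_TIMESTAMP
WHERE  ID = @ID

-- ... use @Pin for the actual business operation here ...

IF @@ERROR <> 0
    ROLLBACK TRANSACTION     -- the PIN becomes available to others again
ELSE
    COMMIT TRANSACTION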
--
Erland Sommarskog, SQL Server MVP, esq...@sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx

Bobus

Feb 14, 2006, 8:20:21 PM
Thanks for the responses everyone!

MGFoster: I was asking about the "limit" clause so that I could
implement a solution similar to what Erland recommended. This
guarantees no collisions.

Randy: in your solution, I believe there is a chance that two
concurrent requests will end up grabbing the same pin.

Alexander: clever! It's like an Oracle sequence. But, in our
particular case, we could have a problem with unused PINs for
transactions which roll back.

Hugo: that should definitely do the trick.

Erland: yours too! I will try them both out.

Thanks for the help!

Alexander Kuznetsov

Feb 15, 2006, 9:45:35 AM
Bobus,

When I was solving a similar problem, I did try out the approaches
suggested by Hugo and Erland. I hate to say it, but I was always
getting a bottleneck because of lock contention on PinsTable. Maybe I
was missing something at that time. I had a requirement to produce
hundreds of PINs per minute at peak times, so I decided to allocate a
batch of PINs at a time instead of distributing them one at a time -
that took care of the lock contention.
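
A rough sketch of that batching idea, using the PinsTable layout from
earlier in the thread (the batch size of 100 and @BatchOwner are just
examples):

DECLARE @BatchOwner varchar(50)
SET @BatchOwner = 'webserver-17'         -- example: identifies who holds the batch

SET ROWCOUNT 100                         -- take up to 100 unacquired PINs in one statement
UPDATE PinsTable
SET    Acquired_By   = @BatchOwner,
       Date_Acquired = CURRENT_TIMESTAMP
WHERE  Acquired_By IS NULL
SET ROWCOUNT 0

SELECT ID, PIN                           -- hand the whole batch to the application
FROM   PinsTable
WHERE  Acquired_By = @BatchOwner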

Erland Sommarskog

Feb 15, 2006, 5:42:56 PM

Bobus said "But, in our particular case, we could have a problem of unused

pins for transactions which rollback."

This would call for a design where you start a transaction, get a
PIN, use it for whatever purpose, and then commit. But as you say, you
will get contention on the PINs here, although it's possible that the
READPAST hint could help. I did some quick tests, and it seems to work.

The other option, as you say, is just to grab a PIN or even a bunch of
them. A later job would then examine which PINs were taken but never
actually used, and mark those as unused again.
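
Purely as an illustration, such a cleanup job might look like this (the
one-day threshold and the Orders table standing in for "was the PIN
actually used" are hypothetical):

UPDATE PinsTable
SET    Acquired_By   = NULL,
       Date_Acquired = NULL
WHERE  Date_Acquired < DATEADD(day, -1, CURRENT_TIMESTAMP)    -- acquired over a day ago
  AND  NOT EXISTS (SELECT * FROM Orders o WHERE o.PIN = PinsTable.PIN)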

Bobus

Feb 16, 2006, 1:30:17 AM
Alexander/Erland: thanks for the tips. I will let you know what issues
we run into, if any.

Best wishes.

Alexander Kuznetsov

Feb 18, 2006, 5:09:42 PM
Erland,

I think you are right. I looked up that project, and in fact I did not
try READPAST at all. Having observed poor performance, I implemented
batches, which reduced the number of database calls and as a side effect
took care of the lock contention. I immediately got good performance and
did not drill down any further.

Bobus

Feb 20, 2006, 4:06:50 AM
Erland's solution seems to work great from our initial tests!

Unrelated question, and probably should be another thread, but have any
of you SQL Server geniuses tried PostgreSQL? Any comments, positive or
negative?

--CELKO--

Feb 20, 2006, 5:34:55 PM
In 25 words or less: it is nice, but it has the feel of a college project
where grad students kept adding things to it based on the latest academic
fad or thesis topic. I would go with Ingres, which has a "commercial
feel" and a great optimizer.
