Mel,
I'm sure Al's beautiful DB locking solution will work just fine :-)
But just in case you want option #14 on how to do this...
I'd look towards moving the logic that creates the unique ID outside the database transaction that inserts the row. Decouple the problem. A simple service can return the next ID (an in-memory call), and that ID is included in the newly inserted row, avoiding the locking problem you get when the sequence fetch and the insert happen in the same transaction.
I've implemented services like this before, and they typically have a small persistent table to store values across service stops and starts. On start-up, the service updates the table to "reserve the right" to hand out the next 100 sequences independently. Each call to the service for the next sequence is then just an in-memory read of a local variable. That read-and-increment must be synchronized so concurrent threads don't get the same sequence; in Java you'd use the synchronized keyword, and I believe in Ruby this is done with Mutex.
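To make that concrete, here's a minimal Ruby sketch of the idea. The names are mine, not from any real library, and `reserve_block!` is a stand-in for the atomic UPDATE against the small persistent table; here it just bumps an in-memory counter so the sketch runs on its own.

```ruby
# A minimal sketch of the in-memory sequence service. The Mutex guards
# the read-and-increment so concurrent threads never get the same ID.
class SequenceService
  def initialize(block_size: 100)
    @block_size = block_size
    @mutex = Mutex.new    # Mutex is built into core Ruby
    @next_id = nil        # next ID to hand out
    @limit = nil          # last ID in the currently reserved block
    @reserved = 0         # stand-in for the persistent table's counter
  end

  # Returns the next unique ID, reserving a fresh block when the
  # current one is exhausted.
  def next_id
    @mutex.synchronize do
      reserve_block! if @next_id.nil? || @next_id > @limit
      id = @next_id
      @next_id += 1
      id
    end
  end

  private

  # Placeholder for the persistent reservation step: a real service
  # would atomically advance a row in the DB table here, so that other
  # service instances skip past this block.
  def reserve_block!
    @reserved += @block_size
    @next_id = @reserved - @block_size + 1
    @limit = @reserved
  end
end
```

If the process dies mid-block, whatever was left of the reserved range is simply lost, which is exactly the gap behavior described below.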
The major benefit of this approach is that it scales easily: just start more services, on the same server or across many. Each service only needs to coordinate through the same persistent table, which it updates once every 100 transactions to reserve the next sequence range. (The range size of 100 is configurable, of course; I'm just using it for illustration.)
The downside is that you can have gaps in your unique IDs. If the service reserved the range 1-100 on start-up, handed out 1-37, and crashed, then on the next start-up it would reserve 101-200 and continue from there. Sequences 38-100 would never be issued.
So if your IDs absolutely have to be contiguous, with no gaps, this approach won't work. But if gaps are OK, it's efficient and scales nicely regardless of the underlying DB. You can lower the block size to minimize gaps, or raise it to reduce coordination and increase throughput.
I fully admit this is overkill if you have low transaction volume or low concurrency! But if not, it's an approach you or others may want to consider.
-ryan