I'm curious about this as well; I'm currently building an application
that may use multiple independent databases, and want to have the
ability to do object replication among them at some point in the
future for redundancy. Database-generated integer IDs are convenient
and all, but they don't scale well to this sort of application!
I've been designing and testing tables in my Rails app (on MySQL)
using a UUID for the 'id' column, which is declared as a VARCHAR(40)
with a unique index, and so far I have not encountered any problems.
Anybody see any real problems with it, conceptually? The problem with
handling it all within a MySQL model, as previously noted, is that you
cannot currently assign the value of a function as the default value
of a column in MySQL, except for a couple of timestamp-type columns.
Here's what I've got in the uuid.rb lib file in my Rails app - it just
asks MySQL for a lowercased UUID:

class UUID
  def self.new
    ActiveRecord::Base.connection.select_value("SELECT LOWER( UUID() )")
  end
end
And then in each model, so the id gets assigned right before insert:

def before_create
  self.id = UUID.new
end
And finally, in my table creation migrations:

create_table :hot_folders, :id => false do |t|
  t.column :id, :string, :limit => 40, :null => false
end
add_index 'hot_folders', 'id', :unique => true
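With all of that in place, records pick up a UUID transparently on
create. Something like this is what I see (the :name attribute and the
id value here are just made-up examples):

folder = HotFolder.create(:name => 'incoming')
folder.id  # => "6ccd780c-baba-1026-9564-0040f4311e29" (for example)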
If you were on a different database engine, or if I am someday, you
could update the uuid.rb lib file to do something different and
appropriate for the database you're using - or generate a UUID
programmatically instead of via the db - via a series of IF/ELSIFs
(IF mysql ELSIF sqlserver ELSIF postgres ELSE ...). A sketch of that
idea is below.
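Here's roughly what I mean - just a sketch, not tested; the adapter
name matching, the Postgres uuid_generate_v4() call (which needs the
uuid-ossp functions installed), and the Ruby fallback are all
assumptions on my part:

class UUID
  def self.new
    conn = ActiveRecord::Base.connection
    case conn.adapter_name.downcase
    when /mysql/
      conn.select_value("SELECT LOWER( UUID() )")
    when /postgres/
      conn.select_value("SELECT uuid_generate_v4()")
    else
      # Fall back to generating a random (v4-style) UUID in Ruby.
      bytes = Array.new(16) { rand(256) }
      bytes[6] = (bytes[6] & 0x0f) | 0x40  # version 4
      bytes[8] = (bytes[8] & 0x3f) | 0x80  # RFC 4122 variant
      hex = bytes.map { |b| "%02x" % b }.join
      [hex[0,8], hex[8,4], hex[12,4], hex[16,4], hex[20,12]].join("-")
    end
  end
end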
The above table creation will be somewhat inefficient for MySQL's
InnoDB engine, as ISTR that if you don't declare a primary key during
table creation, InnoDB creates a hidden one internally. When I get to
load testing & whatnot, that may mean dropping and recreating the
tables to avoid the extra overhead of two unique keys, one of which I
never use.
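If that overhead does show up, one fix (untested sketch, but it's
plain migration code) would be to make the UUID column the real
primary key instead of a separate unique index:

create_table :hot_folders, :id => false do |t|
  t.column :id, :string, :limit => 40, :null => false
end
execute "ALTER TABLE hot_folders ADD PRIMARY KEY (id)"

That way InnoDB clusters on the UUID itself and there's only one
unique key to maintain.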