TL;DR:
Use the 3-0-stable branch of Rails until either a new release candidate or the final release is pushed, whichever comes first. Use the master branch of the adapter.
Version 3.x of the adapter and onward technically supports only SQL Server 2005 & 2008. This is due to the use of the ROW_NUMBER function for limit/offset (taken/skipped) support. Personal bummer to me, but if you feel you have the skills, patches to support 2000 are welcome. Despite that, the adapter has a 2-3-stable branch and I will try to have non-Rails 3 patches go to both branches.
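For those unfamiliar with why ROW_NUMBER matters here: SQL Server 2005/2008 has no LIMIT/OFFSET clause, so paging is done by wrapping the query in a ROW_NUMBER() window and filtering on the computed row index. The sketch below is purely illustrative of that technique, not the adapter's actual code; the method name and alias names are made up.

```ruby
# Illustrative sketch of ROW_NUMBER-based paging (NOT the adapter's code).
# Wraps a plain SELECT in a ROW_NUMBER() window so rows can be filtered
# by position, emulating LIMIT/OFFSET on SQL Server 2005/2008.
def paginate_sql(select_sql, order_by, limit, offset)
  <<-SQL
    SELECT * FROM (
      SELECT ROW_NUMBER() OVER (ORDER BY #{order_by}) AS _row_num, __inner.*
      FROM (#{select_sql}) AS __inner
    ) AS __outer
    WHERE __outer._row_num BETWEEN #{offset + 1} AND #{offset + limit}
  SQL
end

sql = paginate_sql("SELECT id, name FROM users", "id ASC", 10, 20)
# The generated SQL filters rows 21 through 30.
```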
We pass all the tests, but please double-check my work! Especially in the Arel SQL compiler, which we host in the adapter's lib folder. I was able to kill a lot of the string analysis methods we were forced to use in the old adapter, and I like where things have ended up, but I get the feeling that some errors will be exposed once someone starts using the underlying relations in a more dynamic way.
The general rule is this: we do a lot of work to "help" ActiveRecord make good SQL for us, but if you are building the SQL with Arel, you should know the rules yourself. My hope is that our ActiveRecord coercions do not stymie building your own relations correctly. In some cases the compiler should help both you and ActiveRecord. One good example is how the compiler automatically correlates multiple joins on the same table. We'll just see how it goes. Lemme know.
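To make the join-correlation idea concrete, here is a toy sketch of the concept, entirely hypothetical and much simpler than the real compiler: when the same table appears in multiple joins, each repeat gets a numbered alias so the generated SQL stays unambiguous.

```ruby
# Toy illustration of auto-correlating repeated join tables (NOT the
# adapter's compiler). Each repeat of a table name beyond the first is
# given a numbered alias so column references remain unambiguous.
def correlate_join_tables(table_names)
  seen = Hash.new(0)
  table_names.map do |name|
    seen[name] += 1
    seen[name] == 1 ? name : "#{name} AS #{name}_#{seen[name]}"
  end
end

correlate_join_tables(%w[posts comments comments])
# => ["posts", "comments", "comments AS comments_2"]
```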
I just did something a tad drastic, but it really helps us out. Basically, I made the quote method ignore the passed-in column for strings so we can quote all UTF-8 strings with N'...' style quoting. This means we get the upshot of passing in UTF-8 strings as conditions or in stored procedure calls and getting the proper quoting. I've been pleasantly surprised that all tests continue to pass.
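For anyone unfamiliar with N'...' literals: SQL Server treats N'...' strings as Unicode (nvarchar) rather than the database's default code page. A minimal sketch of the quoting idea, with a made-up method name and without the adapter's real column handling:

```ruby
# Hypothetical sketch of Unicode string quoting for SQL Server (NOT the
# adapter's exact quote method). Every string becomes an N'...' literal,
# with embedded single quotes doubled per SQL escaping rules.
def quote_unicode_string(value)
  "N'#{value.to_s.gsub("'", "''")}'"
end

quote_unicode_string("O'Reilly")  # => "N'O''Reilly'"
```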
That still leaves strings coming out of the DB. Klaus has pointed me to some great articles online and there is a ticket on the issues page. Here are some links.
I recently wrote a blog article that dissed the utf8 version of ruby-odbc as totally pointless, and I think I was wrong. It seems that if you use that version, you get the bonus of all strings being encoded as such, and that is not as bad as I thought. It would seem to make non-ASCII column names work, among other things, all without having to place force_encoding all up and down the adapter, since regular ruby-odbc returns ASCII-8BIT/BINARY data.
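The Ruby 1.9 encoding difference described above can be sketched in a few lines. The bytes and re-tagging below are illustrative of the general behavior, not of any adapter code:

```ruby
# Plain ruby-odbc hands back raw bytes tagged ASCII-8BIT (BINARY), so
# the adapter would have to re-tag every string itself; the utf8 build
# returns strings already tagged UTF-8. Simulated here with a literal.
raw = "caf\xC3\xA9".dup.force_encoding(Encoding::ASCII_8BIT) # what plain ruby-odbc yields
utf = raw.dup.force_encoding(Encoding::UTF_8)                # the manual re-tagging step

raw.encoding        # => #<Encoding:ASCII-8BIT>
utf.encoding        # => #<Encoding:UTF-8>
utf.valid_encoding? # => true
utf                 # => "café"
```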
Now, I could be wrong about this, and it does not cover other requests to accommodate easy integration points where others could use Iconv to force something like CP1251, but I think it is a good start per Yehuda's recommendation. Besides, I test ruby-odbc with utf8 and we still pass all the tests. If you're keen on this type of stuff and want to add to the conversation, please add your thoughts to ticket #40.
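For what a CP1251 integration point might look like: the conversion itself is a one-liner in modern Ruby using String#encode (shown here instead of the Iconv library the ticket discussed). The sample bytes are illustrative:

```ruby
# Sketch of converting Windows-1251 (CP1251) bytes to UTF-8, the kind of
# integration point discussed in ticket #40. Uses String#encode rather
# than Iconv. The bytes below spell "привет" in CP1251.
cp1251_bytes = "\xEF\xF0\xE8\xE2\xE5\xF2".dup.force_encoding("Windows-1251")
utf8 = cp1251_bytes.encode("UTF-8")
utf8  # => "привет"
```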
The current adapter is tested under these Ruby versions: ruby-1.8.7-p299, ruby-1.9.1-p378, and ruby-1.9.2-rc2. I used the 0.99991 version of ruby-odbc for most of my testing. If you want to use Ruby 1.9.2, you WILL HAVE TO USE the 0.99992 pre-release version of ruby-odbc. The latest link I have is below. I have even tested 0.99992 on all the other Ruby versions too; it works great.
If you want to contribute and/or test the adapter, check out the RUNNING_UNIT_TESTS file, as it contains updated info on some easy ways to work with the rails/arel code base using Bundler. You will also need to apply my latest patch to Rails, which changes the schema.rb file in a few key places.