Migrating to NH 5.1 from NH 2.1 - a bridge too far?


Miles Thompson

Aug 4, 2019, 12:12:47 PM8/4/19
to nhusers
I'm working on a project where I just upgraded NH 2.1 to 5.1. I wonder if folks here feel that is 'too much' of an upgrade to bite off in one piece? Your gut reactions and first impressions are very much appreciated.

In the 'cons' column, it's a large(ish) ten-year-old system I'm not all that familiar with. On the 'pros' side, you always want to be on the most recent version, and (thankfully, due to some good architecture decisions ten years ago) most of our NH persistence code is fairly well encapsulated, so I only need to look in a handful of projects to find the bits we need to change. Also, I've already successfully upgraded the code from an 'it builds' perspective, and it passed a super basic smoke test. I have been carefully looking through the release notes.


However, further testing has turned up some subtler regressions. The biggest concerns I have found so far are:

 - We're doing this work for performance reasons and, so far, the shift to 5.1 seems to drastically decrease performance rather than improve it. More specifically, we made things more async at the WCF level last week, and this had a huge payoff when done against NH 2.1. However, when we couple those async service changes with the upgrade to NH 5.1, all the performance improvements disappear. Perhaps this is related to the transaction handling changes mentioned in this other thread (I can't wait to try).

- After the upgrade, we are hitting the old "Cannot simultaneously fetch multiple bags" problem in a couple of places, which looks kind of gnarly.
 
Basically, I'd really appreciate the opinions of anyone who has been through this before. Is it worth pushing through with finding and fixing any regressions, or are we biting off months of work to finish a 2.1 -> 5.1 upgrade? Might it be better to just upgrade within 2.x, or just to 3.x?

Thanks in advance for any thoughts or opinions !

Miles Thompson

Aug 11, 2019, 7:32:20 AM8/11/19
to nhusers
So... for anyone else who comes across this message in the future, I'll mention that we decided not to do the upgrade at this time.

Our objective was performance improvement, and fixing subtle regressions was not on our timeline.

I'll admit I was a little disappointed not to have any input or comments from others regarding this question, but I suppose that in itself is a bit of an answer too. ;-)

Thanks,

Miles

Frédéric Delaporte

Sep 8, 2019, 12:43:05 PM9/8/19
to nhusers
Most of the transaction rework was about system transactions (transaction scopes); regular transaction handling was not modified much. Using
<entry key="transaction.factory_class" value="NHibernate.Transaction.AdoNetTransactionFactory"/>
as suggested in the other thread you point to is valid only if you do not use system transactions: this transaction factory simply ignores transaction scopes. Moreover, the system transaction handling of the default transaction factory is normally not involved when using regular transactions, so switching the transaction factory should not have much impact (including on performance) on projects using only regular transactions, while it is a source of bugs for other projects. Between the two transaction factories, the main difference when using only regular transactions is that the system-aware one (the default) checks on many occasions whether a system transaction is ongoing. This can be avoided by disabling the auto-join transaction feature and explicitly asking the session to join the system transaction when you use one. https://nhibernate.info/doc/nhibernate-reference/transactions.html#transactions-scopes
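As a sketch of that suggestion, auto-join can be disabled globally via a configuration entry (the exact setting name should be verified against the linked documentation for your NHibernate version):

```xml
<!-- Sketch: disable automatic enlistment of sessions in ambient
     transaction scopes. Setting name assumed from the NHibernate 5
     configuration schema; verify against the reference documentation. -->
<entry key="transaction.auto_join" value="false"/>
```

With auto-join disabled, a session that must participate in a transaction scope would need to be told so explicitly (NHibernate 5 exposes a JoinTransaction method on the session for this purpose, per the linked documentation).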

The rework on system transactions introduced thread synchronization logic to avoid concurrency troubles occurring with distributed transactions. So far it seems to have fixed most of the issues NHibernate was having with distributed transaction scopes, so it is unlikely it will be undone. Maybe a "non-distributed system transaction" factory could be added for those not needing to support distributed transactions, supposing the performance loss comes from these synchronizations. But things are very tricky there: a scope can be escalated to distributed in many unexpected cases. (Using the ODBC data provider, or providers from SAP products (Sybase, Hana, ...), seems to always cause all scopes to be distributed; some other data providers will go distributed if opening, closing, and reopening a connection in the scope, while others will not; ...)

About the bag troubles, it is most likely pointing to a latent bug in the application code, causing results to be duplicated with older releases. Maybe your application has some mechanism to compensate for this, but for upgrading, you will have to handle these cases by avoiding fetching more than one bag at a time. Frequently a bag is used in mappings where a set should really be used instead, so adjusting the mappings could be one way of solving the trouble.
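For illustration, the bag-to-set change in an hbm.xml mapping might look like the sketch below (the entity, table, and column names here are hypothetical, not taken from the thread):

```xml
<!-- Hypothetical mapping sketch: when two bag collections on the same
     entity are both join-fetched, NHibernate raises "cannot simultaneously
     fetch multiple bags". Mapping the collection as a set (backed by an
     ISet<T> property on the entity) instead of a bag lets NHibernate
     de-duplicate the rows produced by the join. -->
<set name="Orders" table="Orders" fetch="join">
  <key column="CustomerId"/>
  <one-to-many class="Order"/>
</set>
```

Note that this implies changing the mapped property from IList<T> to ISet<T> in the entity class; an alternative that keeps the bag is switching one of the collections to fetch="select" so that only one bag is fetched per SQL query.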