A Thread.MemoryBarrier() might do the trick after the first `cacheRef = __cache;` assignment.
The issue is that every reader imposes a yield. Under my scenario, and most
users', the __cache will rarely be written to, yet a lock is
imposed on every reader.
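To make the complaint concrete, here is a minimal sketch (in Java, as an analogue of the C# code; the class and method names are hypothetical, and the string value stands in for the real parsed object) of the lock-on-every-read pattern being criticized:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the current pattern: every reader acquires the
// lock, even when the cache already contains the key.
class LockedCache {
    private final Map<String, String> cache = new HashMap<>();

    String get(String url) {
        synchronized (cache) { // every reader pays the lock cost here
            // "parsed:" + url stands in for constructing the real object
            return cache.computeIfAbsent(url, k -> "parsed:" + k);
        }
    }
}
```

Even on a cache hit, the reader cannot return without first taking the monitor.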
Regarding volatile - we may still be OK. Under the current code, the thread must yield because lock() {} is always called. Under the proposed code, a yield is not mandated when the current cache already has the item.
Assuming a non-volatile __cache declaration, there are two dry-cache outcomes:
1 - the __cache does not have the URL
2 - the "real" __cache (the new one) has it, but the reader sees the old (dry) __cache
In either event, the lock(){} then steps in and imposes the isolation and full sync with main memory. __cache is then re-read, but never modified: it is copied, and then the main __cache reference is atomically swapped.
The __cache reference is always assigned an effectively read-only value. __cache is never modified in place, because the reference to it is grabbed once. If one thread is reading __cache while another is inside the lock, __cache may be swapped mid-execution, but the cacheRef variable holds a point-in-time reference to __cache. If the cache is then replaced by another thread, the reader still has the old one, and at most would be compelled to take the lock and try creating a new MongoUrl() - but even that would not happen, due to the second check inside the lock.
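The whole proposed scheme can be sketched as follows (again in Java as an analogue of the C# code; names are illustrative, `volatile` plays the role of the proposed fix, and the string value stands in for `new MongoUrl()`):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the copy-on-write cache described above: readers take a
// point-in-time snapshot reference and never lock on a hit; writers copy
// the map under a lock and atomically swap the reference.
class CopyOnWriteCache {
    private volatile Map<String, String> cache = new HashMap<>(); // volatile = proposed fix
    private final Object writeLock = new Object();

    String get(String url) {
        Map<String, String> snapshot = cache;  // point-in-time reference
        String value = snapshot.get(url);
        if (value != null) {
            return value;                      // fast path: no lock, no yield
        }
        synchronized (writeLock) {             // slow path: dry cache
            value = cache.get(url);            // second check: another thread may have added it
            if (value != null) {
                return value;
            }
            Map<String, String> copy = new HashMap<>(cache); // never mutate the live map
            value = "parsed:" + url;           // stand-in for new MongoUrl(url)
            copy.put(url, value);
            cache = copy;                      // atomic reference swap
            return value;
        }
    }
}
```

A reader that raced the swap either finds the key in its snapshot, or misses and falls into the lock, where the second check prevents a redundant construction.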