I have not programmed in C#.NET in, like, forever, so treat the sketches below as untested, but this is the general theory (provided you don't require replication between the two).
I see your aim as described by the following example:
- You pull a large joined result set from MySQL.
- You wish to house the results within Mongo so they can be fetched again for a certain period of time without renewal (let's say 1 hour).
The desired effect is to speed everything up.
The method I would use is to house just one collection within Mongo, called query_cache.
The schema would be:

    {
        '_id': ObjectId(),
        'user_id': id_from_sql,
        'query': the_full_sql_query_string,
        'result': JSON_encoded_string_of_results,
        'date_cached': UNIX_TS_of_when_it_was_cached,
        'date_expire': UNIX_TS_of_when_the_result_set_expires
    }
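As a rough illustration, that document could map to a C# class like the one below, assuming the official MongoDB .NET driver (the MongoDB.Bson / MongoDB.Driver packages); the class and property names are my own, not anything prescribed:

    using MongoDB.Bson;
    using MongoDB.Bson.Serialization.Attributes;

    // Sketch of the cache document as a POCO for the official
    // MongoDB .NET driver. All names are illustrative only.
    public class QueryCacheEntry
    {
        [BsonId]
        public ObjectId Id { get; set; }

        [BsonElement("user_id")]
        public int UserId { get; set; }       // id_from_sql

        [BsonElement("query")]
        public string Query { get; set; }     // the full SQL query text

        [BsonElement("result")]
        public string Result { get; set; }    // JSON-encoded result set

        [BsonElement("date_cached")]
        public long DateCached { get; set; }  // UNIX TS when cached

        [BsonElement("date_expire")]
        public long DateExpire { get; set; }  // UNIX TS when it expires
    }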
When you pull a result set from MySQL you will run through it, turning it into a JSON string. You will then decide how long you want that cache entry to live (1 hour, 2 hours, 2 days, etc.) and compute a UNIX TS for that time. It is also handy to store the user_id so that user-specific queries can be fetched reliably.
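Something like this, assuming Json.NET (Newtonsoft.Json) for the serialization and building on the QueryCacheEntry class above; the row shape and helper name are placeholders for whatever your data layer returns:

    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;

    public static class CacheHelpers
    {
        // Build a cache entry from rows you have already pulled from SQL.
        // "rows" is a placeholder shape (one column->value map per row).
        public static QueryCacheEntry BuildEntry(
            int userId, string sql,
            List<Dictionary<string, object>> rows,
            TimeSpan lifetime)
        {
            return new QueryCacheEntry
            {
                UserId     = userId,
                Query      = sql,
                Result     = JsonConvert.SerializeObject(rows),
                DateCached = DateTimeOffset.UtcNow.ToUnixTimeSeconds(),
                DateExpire = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds()
            };
        }
    }

For the 1-hour case that would be BuildEntry(userId, sql, rows, TimeSpan.FromHours(1)).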
The next time you go to run that SQL query, first check whether a row exists in Mongo for this user with that exact query. If one does, use those results, provided they are not stale (if they are, rerun the query and update the cached row); otherwise run the query and insert the results.
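In code, that check-then-renew flow might look like the following, again assuming the current MongoDB .NET driver; the method names and the user_id+query compound key are my own choices:

    using System;
    using MongoDB.Driver;

    public static class QueryCache
    {
        // Return the cached JSON for this user+query, or null on a miss
        // (no row, or the row has expired).
        public static string TryGetCached(
            IMongoCollection<QueryCacheEntry> cache, int userId, string sql)
        {
            long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
            var filter = Builders<QueryCacheEntry>.Filter.Eq(e => e.UserId, userId)
                       & Builders<QueryCacheEntry>.Filter.Eq(e => e.Query, sql)
                       & Builders<QueryCacheEntry>.Filter.Gt(e => e.DateExpire, now);
            var hit = cache.Find(filter).FirstOrDefault();
            return hit?.Result;
        }

        // Insert the entry, or overwrite a stale one for the same user+query.
        public static void Renew(
            IMongoCollection<QueryCacheEntry> cache, QueryCacheEntry entry)
        {
            var key = Builders<QueryCacheEntry>.Filter.Eq(e => e.UserId, entry.UserId)
                    & Builders<QueryCacheEntry>.Filter.Eq(e => e.Query, entry.Query);
            cache.ReplaceOne(key, entry, new ReplaceOptions { IsUpsert = true });
        }
    }

On a miss, run the SQL as normal, build a fresh entry (e.g. with BuildEntry above), and call Renew so the next request hits the cache.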
The downside to this is large result sets. Since a single document is capped at 16MB, you will suffer when caching big queries whose JSON runs past that limit (think 10K+ rows).
This could be sorted in a number of ways:
- Design the Mongo cache store differently so it works like a second database, avoiding the problem altogether (this means more work and might be hard to bolt onto a complex SQL system).
- Create a GridFS-type specification and driver that splits a result set across many documents within a collection, with a front (head) document describing the set so it is easy to query and rebuild; a rough sketch of this follows below.
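To make the second option a little more concrete, here is an untested sketch of that splitting idea: a head document in query_cache plus numbered chunk documents in a second collection. Everything here (the ResultChunk shape, the 8MB slice size, the field names) is invented for illustration:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using MongoDB.Bson;
    using MongoDB.Bson.Serialization.Attributes;

    // One slice of an oversized JSON result, GridFS-style.
    public class ResultChunk
    {
        [BsonId]                 public ObjectId Id { get; set; }
        [BsonElement("head_id")] public ObjectId HeadId { get; set; } // parent query_cache doc
        [BsonElement("n")]       public int ChunkIndex { get; set; }  // order for reassembly
        [BsonElement("data")]    public string Data { get; set; }     // slice of the JSON string
    }

    public static class ChunkedResults
    {
        // Split the JSON into slices that sit safely under the 16MB doc limit.
        public static List<ResultChunk> Split(ObjectId headId, string json,
                                              int chunkSize = 8 * 1024 * 1024)
        {
            var chunks = new List<ResultChunk>();
            for (int i = 0, n = 0; i < json.Length; i += chunkSize, n++)
            {
                chunks.Add(new ResultChunk
                {
                    HeadId     = headId,
                    ChunkIndex = n,
                    Data       = json.Substring(i, Math.Min(chunkSize, json.Length - i))
                });
            }
            return chunks;
        }

        // Reassemble: fetch all chunks for a head_id, sort by n, concatenate.
        public static string Join(IEnumerable<ResultChunk> chunks)
            => string.Concat(chunks.OrderBy(c => c.ChunkIndex).Select(c => c.Data));
    }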
Hope this helps a little, and apologies that the code above is sketched from memory rather than tested.