Hi Phillip,
When we faced a similar situation and were concerned about the startup/restore efficiency of parsing all that JSON every time the program was run, we concluded that a lazy evaluation strategy worked best for us: we decomposed the object for serialization and storage, then reconstituted each piece when it was referenced. Your additional step of compression would complicate this, but the basic idea holds.
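To make that concrete, here is a rough sketch of the idea in Node.js. The file-per-top-level-key layout and the names (decompose, reconstitute, snapshotDir) are just illustrative, not what we actually shipped; adapt them to however you store your data.

const fs = require('fs');
const path = require('path');

// Write each top-level property to its own JSON file.
function decompose(bigObject, snapshotDir) {
  fs.mkdirSync(snapshotDir, { recursive: true });
  for (const [key, value] of Object.entries(bigObject)) {
    fs.writeFileSync(path.join(snapshotDir, key + '.json'), JSON.stringify(value));
  }
}

// Reconstitute lazily: nothing is parsed until a property is first touched.
function reconstitute(snapshotDir) {
  const cache = new Map();
  return new Proxy({}, {
    get(_target, key) {
      if (typeof key !== 'string') return undefined;
      if (!cache.has(key)) {
        const file = path.join(snapshotDir, key + '.json');
        cache.set(key, JSON.parse(fs.readFileSync(file, 'utf8')));
      }
      return cache.get(key);
    }
  });
}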
If you really need a monolithic snapshot, your plan of implementing your own serializer based on existing deep-copy and stringify code is probably the permanent solution with the fewest surprises. If you publish it, you may discover you're not the only one reading and writing large JSON objects.
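If you do go that route, one thing worth considering is having your stringify yield chunks rather than build one giant string, so the output can be piped straight through compression. A minimal sketch, handling only plain JSON-safe data (no cycles, replacers, or toJSON); the function names here are mine, not from any library:

const fs = require('fs');
const { createGzip } = require('zlib');

// Yield the JSON text of `value` piece by piece.
function* stringifyChunks(value) {
  if (value === null || typeof value !== 'object') {
    yield JSON.stringify(value);
  } else if (Array.isArray(value)) {
    yield '[';
    for (let i = 0; i < value.length; i++) {
      if (i > 0) yield ',';
      yield* stringifyChunks(value[i]);
    }
    yield ']';
  } else {
    yield '{';
    let first = true;
    for (const [k, v] of Object.entries(value)) {
      if (v === undefined) continue;   // match JSON.stringify: skip undefined props
      if (!first) yield ',';
      first = false;
      yield JSON.stringify(k) + ':';
      yield* stringifyChunks(v);
    }
    yield '}';
  }
}

// Stream the snapshot straight into a gzipped file without ever holding
// the full JSON string in memory.
function writeSnapshot(bigObject, filePath) {
  const gz = createGzip();
  gz.pipe(fs.createWriteStream(filePath));
  for (const chunk of stringifyChunks(bigObject)) gz.write(chunk);
  gz.end();
}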
However, if it is possible to leverage your knowledge of the data to decompose it for serialization, a small amount of JS to paper over the decomposition may be all you need.
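From the calling code's point of view the decomposition can be invisible. Assuming the reconstitute() sketch above, and with made-up property names:

const state = reconstitute('./snapshot');
console.log(state.settings);  // parsed on first access
console.log(state.history);   // each part is read only when touched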
-J