Hi Antonio, Laurence,
It is possible to save a full Model and ModelResult (including all Parameter attributes, fit statistics, and arrays) with their `dump()` or `dumps()` methods (and re-load them with the `load()` or `loads()` methods). These dumps are JSON-formatted blobs and not particularly nice to look at. They will include serialized numpy arrays for best_fit, independent variables, etc.
But I agree that when assessing a series of fits, it is probably more helpful to make a table (spreadsheet, csv file) with a few key fit statistics and parameter values and uncertainties for each fit.
You could do both: dump each ModelResult to a file in a folder (or some kind of database), and also write a summary csv file.
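A sketch of the summary-csv half, using only the standard library. The per-fit dicts here are hypothetical stand-ins; with lmfit you would pull these values from each ModelResult (e.g. `result.chisqr`, `result.redchi`, `result.params['center'].value` and `.stderr`):

```python
import csv

# hypothetical per-fit summaries, as extracted from saved ModelResults
fits = [
    {'name': 'scan001', 'chisqr': 12.3, 'redchi': 1.05,
     'center': 0.301, 'center_stderr': 0.004},
    {'name': 'scan002', 'chisqr': 14.8, 'redchi': 1.21,
     'center': 0.297, 'center_stderr': 0.005},
]

with open('fit_summary.csv', 'w', newline='') as fh:
    writer = csv.DictWriter(fh, fieldnames=list(fits[0]))
    writer.writeheader()
    writer.writerows(fits)
```

One row per fit keeps the file easy to load back into pandas or a spreadsheet for comparison.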
I (and I think maybe Laurence too) use HDF5 for lots of large-ish experimental data sets - it's widely used at synchrotrons and similar facilities. I would not recommend trying to put the JSON dump files into HDF5. But, it might be interesting to think about using an HDF5 group as an alternative for JSON for this purpose.
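For what an HDF5-group layout might look like: a sketch with h5py, one group per fit, scalar statistics as attributes and arrays as datasets. All of the group, dataset, and attribute names here are illustrative, not any existing lmfit format:

```python
import numpy as np
import h5py

with h5py.File('fits.h5', 'w') as f:
    grp = f.create_group('fit_scan001')
    grp.attrs['chisqr'] = 12.3      # scalar fit statistics as attributes
    grp.attrs['redchi'] = 1.05
    names = np.array(['amplitude', 'center', 'sigma'], dtype='S')
    grp.create_dataset('param_names', data=names)
    grp.create_dataset('param_values', data=[4.01, 0.301, 0.8])
    grp.create_dataset('param_stderrs', data=[0.02, 0.004, 0.003])
    grp.create_dataset('best_fit', data=np.zeros(201))  # fitted curve array

with h5py.File('fits.h5', 'r') as f:
    grp = f['fit_scan001']
    print(grp.attrs['redchi'], grp['param_values'][1])
```

Arrays like best_fit compress well in HDF5, and many fits can live side by side in one file, which JSON dumps in a folder do not give you for free.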
How to format multiple related fits for comparing results has been discussed a few times in the past. We don't currently have tools to do that. Perhaps it is time to think about adding them?