It depends on how big it is; set up a microbenchmark.
With a bit of work you should be able to load and serialize 20 MB in less than 1/10 of a second. If it's too big, snapshot the data or put it in a separate data store; XML blobs store much better in event stores/files/table stores than in a relational DB.
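A rough microbenchmark sketch along these lines, using Python's stdlib `ElementTree` (the record shape and sizes here are made up for illustration; measure with your own data and serializer):

```python
import time
import xml.etree.ElementTree as ET

# Build a document of simple <row> records until it is roughly 20 MB.
root = ET.Element("rows")
for i in range(200_000):
    row = ET.SubElement(root, "row", id=str(i))
    row.text = "x" * 80
blob = ET.tostring(root, encoding="unicode")
print(f"size: {len(blob) / 1e6:.1f} MB")

# Time one load + serialize round trip of the whole blob.
start = time.perf_counter()
parsed = ET.fromstring(blob)                    # load
out = ET.tostring(parsed, encoding="unicode")   # serialize
elapsed = time.perf_counter() - start
print(f"round trip: {elapsed:.3f} s")
```

Run it a few times and take the best number; a cold first run includes allocator and cache warm-up.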
It's more a question of data storage. In my case, fetching large blobs by key was much faster than, say, querying 10-12 SQL tables. Remember that latency in non-cached IO is significant.
If you treat them right, XML blobs/messages go very far. If you do naive things, e.g. completely default serialization of complex object graphs, or serializing whole data sets, you will quickly be in trouble.
Also, with XML, removing all namespaces, using smart default values, and using short element names can reduce the size significantly (a factor of 2-3). It also compresses very well: 10:1 can be common for large XML blobs. A single request will be slightly slower (especially if the data is cached), but you're not hogging all the disk IO, so you can serve more requests overall.
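A quick way to check the compression point for your own data is to gzip a representative blob and compare sizes. A sketch with a deliberately repetitive, made-up document (real ratios depend on how repetitive your element names and values are):

```python
import gzip

# Hypothetical record shape: short names and repeated structure
# compress extremely well.
record = '<order id="%d"><customer>ACME</customer><qty>10</qty></order>'
xml = "<orders>" + "".join(record % i for i in range(50_000)) + "</orders>"

raw = xml.encode("utf-8")
packed = gzip.compress(raw)

print(f"raw:        {len(raw) / 1e6:.2f} MB")
print(f"compressed: {len(packed) / 1e6:.2f} MB")
print(f"ratio:      {len(raw) / len(packed):.0f}:1")
```

On this kind of synthetic data the ratio is well above 10:1; real business documents usually land lower, so test with a real blob before sizing storage.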
The big thing to be careful of with XML blobs is rewriting large ones on small changes, but that is not really an issue here.
Ben