One useful heuristic is to take the largest transfer you expect to process at once and allocate three to four times that much space for your processing location (i.e., /var/archivematica/sharedDirectory). This allows both normalization for access and normalization for preservation to run, as well as the Examine Contents microservice.
If you process materials and leave them in the pipeline (for example, waiting at an 'Upload DIP' question or some other user prompt) while continuing with other transfers, you will need extra space. If you always process one transfer at a time and run it all the way through to either transfer backlog or AIP storage, you don't need extra space to hold incomplete transfers/SIPs. It is a trade-off.
The amount of space required for normalization depends on the files being normalized. Turning TIFFs into JPEGs for access requires far less space than turning a .mov file into an .mkv for preservation. In my experience, some compressed video formats can require 7 to 20 times as much disk space (i.e., a 10 GB .mov could require up to 200 GB of disk space during normalization).
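To make the arithmetic above concrete, here is a minimal sketch of a back-of-the-envelope estimator. The function name and default multiplier are my own illustrative choices, not part of Archivematica itself; the 3-4x rule of thumb and the 7-20x figure for compressed video come from the observations above.

```python
def estimate_processing_space_gb(largest_transfer_gb: float,
                                 multiplier: float = 4.0) -> float:
    """Rough shared-directory space estimate, in GB.

    multiplier: expected worst-case growth during processing.
      - 3 to 4 is the general rule of thumb (access + preservation
        normalization plus the Examine Contents microservice).
      - Some compressed video formats may need 7 to 20.
    This is an illustrative helper, not official Archivematica guidance.
    """
    return largest_transfer_gb * multiplier

# A 10 GB transfer of compressed video at the worst-case 20x multiplier:
print(estimate_processing_space_gb(10, multiplier=20))   # 200 GB
# A 100 GB transfer of typical mixed content at the default 4x:
print(estimate_processing_space_gb(100))                 # 400 GB
```

Remember that this sizes a single transfer; if you keep several transfers waiting in the pipeline at once, multiply accordingly.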