I've blown quota with Globus transfers many times.
When this happens, the Globus transfer stops (but does not fail), goes into an error condition, and notifies me. I can then either cancel the transfer or clear some space. If I clear enough space for the transfer to proceed, Globus's error-recovery logic simply resumes it after a minute or so, and the transfer completes normally.
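
If you'd rather script around that behavior than wait on email notifications, here's a rough sketch using the globus-sdk Python package. It assumes you already have an authorized TransferClient and a task_id in hand; the exact "QUOTA_EXCEEDED" status string is my assumption from memory, so verify it against one of your own stalled tasks before relying on it.

```python
import time

import globus_sdk


def watch_transfer(tc: globus_sdk.TransferClient, task_id: str, poll_secs: int = 60):
    """Poll a transfer task and report when it stalls on quota."""
    while True:
        task = tc.get_task(task_id)
        status = task["status"]      # ACTIVE / INACTIVE / SUCCEEDED / FAILED
        nice = task["nice_status"]   # finer-grained state; "QUOTA_EXCEEDED" is assumed
        if status == "SUCCEEDED":
            print("transfer completed normally")
            return
        if status == "FAILED":
            print(f"transfer failed: {nice}")
            return
        if nice == "QUOTA_EXCEEDED":
            # The task is paused, not dead: clear some space and Globus's
            # retry logic will pick it back up on its own.
            print("stalled on quota -- clear space and it should resume")
        time.sleep(poll_secs)
```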
Are you trying to protect users from themselves, or to ensure that an automated workflow fits within quota constraints? In either case, it seems to me that the user should be aware of what they are doing and make allowances to manage their space. For example, I could start a big transfer that won't fit in quota, but shuttle files out of my storage area as they complete, freeing up space while the transfer runs. If the data portal I was using to launch the transfer didn't give me a way to break the transfer into bite-sized pieces, I would still be able to get my data moved. If you enforce quota at job start time, my options are limited.
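
Here's a sketch of that shuttle idea in plain Python, with hypothetical paths. The mtime check is a crude stand-in for "this file is done"; a more careful version would ask the Transfer API which files the task has already completed (e.g., the SDK's task_successful_transfers) instead of guessing from timestamps.

```python
import shutil
import time
from pathlib import Path

LANDING = Path("/scratch/me/landing")  # quota-limited destination (hypothetical path)
ARCHIVE = Path("/archive/me/bulk")     # roomier long-term storage (hypothetical path)


def drain(min_age_secs: int = 120) -> None:
    """Move files that haven't changed recently, on the theory they're complete."""
    now = time.time()
    for f in LANDING.rglob("*"):
        if f.is_file() and now - f.stat().st_mtime > min_age_secs:
            dest = ARCHIVE / f.relative_to(LANDING)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(dest))  # frees quota for the in-flight files


if __name__ == "__main__":
    while True:
        drain()
        time.sleep(60)
```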
Just trying to think about multiple angles here....