I'm interested to hear how folk automate pre-production audio levelling of backing tracks.
Many shows I work on are based on backing tracks: music, choir and dance concerts, fashion shows, and so on.
Dynamic range is often wide, sometimes inherently (classical music, for example) and sometimes because the track is a mash-up (dance tracks, for example).
Normalising gives consistency between tracks but doesn't help within them, so I'm interested in how folk manage wide dynamic range within backing tracks.
Obviously it's possible to ride faders on a console, and QLab's integrated fade is also useful, but both of these are relatively hands-on, in performance and in pre-production respectively, and I'm interested in automation that gets closer to set-and-forget.
One possibility is applying compression in performance, either on a console or in QLab via the AUDynamicsProcessor audio effect.
A related option is pre-processing tracks with compression as well as normalisation, using Audacity, ffmpeg, Logic Pro, GarageBand, etc.
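To make the ffmpeg route concrete, here's a minimal sketch of the kind of chain I mean; the filenames and numbers are placeholders to illustrate, not a recommendation, and much the same settings would translate to a console compressor or AUDynamicsProcessor:

```
# Sketch only: illustrative values, not a recommendation.
# Gentle 2:1 compression (threshold 0.1 linear, roughly -20 dBFS, ~6 dB make-up gain),
# then one-pass loudness normalisation to -16 LUFS with a -1.5 dBTP ceiling.
# loudnorm upsamples internally, so -ar 48000 brings the output back to 48 kHz.
ffmpeg -i original.wav \
  -af "acompressor=threshold=0.1:ratio=2:attack=20:release=250:makeup=2,loudnorm=I=-16:LRA=11:TP=-1.5" \
  -ar 48000 processed.wav
```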
A step further down this path is Auphonic, which is aimed at podcasts and similar spoken content but has options for levelling music as well, removing the need to decide compressor settings such as threshold, ratio, attack, release, and make-up gain.
I've gone pretty deep down this rabbit hole, using the r128x-cli tool referred to in the Level Playing Field QLab Cookbook recipe to compare and contrast the different approaches.
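For anyone who wants to run the same comparison, this is roughly how I've been measuring files, sketched here with ffmpeg as an alternative to r128x-cli (which reports similar EBU R128 figures); the filename is a placeholder:

```
# Print an EBU R128 summary for a file: integrated loudness (I), loudness
# range (LRA) and, with peak=true, true peak, so before/after versions
# of a track can be compared.
ffmpeg -nostats -i processed.wav -af ebur128=peak=true -f null -
```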
What do others do to level backing tracks? What methods or configurations work well?