Hamid,
In addition to sampling [1], there are a few options you can try. One is workload reduction, for instance by using a smaller input set (although this may affect working set sizes!) or by reducing the number of iterations if you have an iteration-based application. These are accepted practices, so you should be able to find a significant amount of academic literature on the methods and trade-offs involved.
In Sniper itself, there are a few time-consuming models that you can turn off, but that of course requires that you already know how your benchmark behaves so you can gauge how this will affect accuracy. Instruction cache modeling is fairly slow, so if your application has a small code footprint and isn't generally affected by I-cache misses (check the CPI stack on a full simulation first), you can save some time by not simulating I-cache accesses (-ggeneral/enable_icache_modeling=false). The same holds for branch prediction (use -gperf_model/branch_predictor/type=none to turn it off).
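For example, a run that disables both of these models might look something like the lines below; the run-sniper script name, the gainestown base configuration and the benchmark command are just placeholders for whatever you use in your own setup:

    ./run-sniper -c gainestown \
        -ggeneral/enable_icache_modeling=false \
        -gperf_model/branch_predictor/type=none \
        -- ./my_benchmark my_input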
Sniper 5.3 added a new simulation mode (cache-only, enabled with the -ccacheonly configuration option) where only caches and branch predictors are simulated, along with a first-order timing model (one-IPC plus cache and branch latencies). If you don't care about absolute runtime but only about miss rates, this could be an option for you as well.
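As a rough (again, placeholder) example of how such a run could be started:

    ./run-sniper -ccacheonly -- ./my_benchmark my_input

Cache and branch predictor miss rates will still be collected, while the reported timing is only the first-order estimate described above.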
As for how much faster you'll be able to make simulation using these techniques, that depends a lot on how well you know your application, and on how much accuracy you need or are willing to give up. Reducing the application itself can give you a good speedup (10x or more) but requires that you have good insight into application behavior to know which parts are relevant. Using cache-only mode should also give you roughly 10x, but you'll lose most of the timing information; this method is probably better suited for validation (e.g. making sure that your cache hit rates don't change too much after changing the input set). Disabling individual simulation models (I-cache, branch prediction) won't gain you more than a few tens of percent.
Regards,
Wim