I could give you a theoretical answer, but it'd only cost you a couple of dollars to find the actual answer for your real-world data.
The stated design center for Refine is < 1M rows, but that's a simplistic rule of thumb. Refine will use as much memory as you can give your JVM. Refine gets a certain amount of parallelism through the use of separate threads for various types of operations (handling connections, background save processing, etc.), but the core GREL algorithms themselves are generally not parallelized (i.e., it won't run N copies of an algorithm over 1/N row sets, even when the operation decomposes easily).
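If memory turns out to be the constraint, the first knob to turn is the JVM heap size. A rough sketch of how to raise it, assuming a recent Refine install (the exact flag and file names vary by version and platform; both routes just set the JVM's -Xmx under the hood):

    # Linux/Mac: pass a heap size to the launch script
    ./refine -m 6000M

    # Windows: edit refine.ini and raise the memory setting, e.g.
    REFINE_MEMORY=6000M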
In my experience, most performance problems with Refine are caused by either a) heap thrashing from too small a JVM heap or b) browser issues with very large tables (e.g., many columns) or lists (big facet choice lists). If you look at the JVM stats for the 12M-heap JVM with your 5M-row data set, you may be able to get a sense of whether heap contention is an issue.
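A quick way to check for heap contention is to watch GC activity with the standard JDK tools while a representative operation runs. A sketch, assuming the JDK's jps/jstat are on your path (<pid> is a placeholder for Refine's process id):

    # find Refine's JVM process id
    jps -l

    # sample GC utilization every 5 seconds
    jstat -gcutil <pid> 5000

    # if old-gen occupancy (O) stays pinned near 100% and the full-GC
    # count (FGC) climbs steadily, the heap is too small for the data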
My gut feel is that 5M rows should be easily doable with sufficient memory, but the only way to tell is to try it.
Tom