Your mileage may vary.
For me, such a solution might mean a speedup of roughly 30x (given the number of graphics cores, the per-core performance differences, non-optimal parallelization, etc. - a rough but deliberately conservative guess).
If you look at cloud pricing, using this might cost something like 5-10 USD/hour (where partial hours also count, i.e. are billed as full hours).
Given the number of runs one has to do to reach a reasonable result, this can easily add up to dozens or hundreds of hours. That might be OK from your perspective; it looks like a bad deal from mine. (This is also why modern high-performance computers often take a GPU-heavy approach.)
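To make that concrete, here is the back-of-the-envelope calculation as a small Python sketch. Every number in it (the runs, the hours per run, the hourly price, the speedup) is my own assumption for illustration, not a measurement:

    import math

    # Back-of-the-envelope cloud cost estimate.
    # All numbers below are guesses for illustration only.
    speedup = 30           # assumed GPU speedup over the CPU baseline
    price_per_hour = 7.5   # USD/hour, midpoint of the 5-10 USD range
    runs = 50              # hypothetical number of runs to reach a result
    hours_per_run = 2.0    # hypothetical wall-clock hours per GPU run

    # Cloud providers typically bill partial hours as full hours,
    # so round each run up to the next whole hour.
    billed_hours = runs * math.ceil(hours_per_run)
    cost = billed_hours * price_per_hour

    print(f"GPU wall-clock time:  {runs * hours_per_run:.0f} h")
    print(f"CPU-equivalent time:  {runs * hours_per_run * speedup:.0f} h")
    print(f"Estimated cloud bill: ~{cost:.0f} USD")

Under those guesses the GPU still saves you thousands of CPU-equivalent hours, but the cloud bill lands in the high hundreds of USD - which is the "bad deal" I mean.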
So I think going GPU-first is probably a good idea; one might then want to cascade it with a multi-machine approach later (AWS offers dedicated GPU instances in the cloud).
But all this is moot; the real question is whether anyone with the necessary expertise is pursuing this approach.
Or perhaps someone has better data for estimating the kinds of results to be expected.
Klaus