I would look into partitioning the tests into batches, so that keystone tests are in earlier batches and fine-grained tests are in later batches.
I would also pull all the fast tests into earlier batches.
If you collect the average execution time for each test case, you can sort tests by increasing execution time and fail fast. A regression is a regression... though it depends; I find it can be handy to know that there is only one failing test rather than 500.
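As a rough sketch of that ordering (assuming pytest as the runner and a `timings.json` file of historical averages, both of which are just illustrative names):

```python
# Sketch: run tests in order of increasing average execution time, failing fast.
# Assumes timings.json maps test id -> average execution time in seconds
# (a hypothetical file you'd produce from your CI history).
import json
import subprocess
import sys

def run_fastest_first(timings_path="timings.json"):
    with open(timings_path) as f:
        avg_times = json.load(f)  # e.g. {"tests/test_login.py::test_ok": 0.12, ...}

    # Sort by increasing average execution time so cheap tests run first.
    ordered = sorted(avg_times, key=avg_times.get)

    # Fail fast: stop the whole run on the first failing test (pytest's -x flag).
    result = subprocess.run(["pytest", "-x", *ordered])
    sys.exit(result.returncode)

if __name__ == "__main__":
    run_fastest_first()
```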
Similarly, any tests that failed last time, irrespective of how long they take to run, should probably be run first.
Armed with per-test coverage data, per-test execution time, and the set of failing tests from the last run, it should be possible to batch your regression suite into say 4-8 batches and fail based on the results from a batch.
The first batch should be something like the 15% fastest tests, plus any tests that failed last time, topped up with whatever gives the highest coverage from what remains.
I'd go with a 3:2 split of execution time to coverage... that's purely a guess though. So if, say, your batch is 1000 tests, pick the top 600 ranked by shortest execution time, sum their coverage, and then pick 400 more tests to fill the gaps in that coverage (irrespective of execution time).
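Something like the sketch below is what I mean. All the inputs (per-test average time, per-test coverage sets, the set of last-run failures) are assumed to be collected elsewhere, and the names, the greedy coverage fill, and the 60/40 numbers are illustrative, not a fixed recipe:

```python
# Sketch: build the first batch from failing tests, fastest tests, and
# a greedy fill of remaining coverage gaps.
def build_first_batch(avg_time, coverage, failed_last_run, batch_size=1000,
                      time_ratio=0.6):
    # avg_time: {test_id: seconds}, coverage: {test_id: set of covered lines},
    # failed_last_run: set of test ids that failed in the previous run.
    batch = list(failed_last_run)            # failing tests always go in first
    remaining = [t for t in avg_time if t not in failed_last_run]

    # ~60% of the batch: the fastest tests.
    by_speed = sorted(remaining, key=avg_time.get)
    n_fast = int(batch_size * time_ratio) - len(batch)
    batch += by_speed[:max(n_fast, 0)]

    # ~40% of the batch: greedily fill coverage gaps, ignoring execution time.
    covered = set().union(*(coverage.get(t, set()) for t in batch))
    candidates = [t for t in remaining if t not in batch]
    while len(batch) < batch_size and candidates:
        # Pick the test that adds the most not-yet-covered lines.
        best = max(candidates, key=lambda t: len(coverage.get(t, set()) - covered))
        if not coverage.get(best, set()) - covered:
            break  # nothing left adds new coverage; stop early
        batch.append(best)
        covered |= coverage[best]
        candidates.remove(best)

    return batch
```

Later batches would then be built the same way from whatever tests are left over.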
If you have a lot of historical data, you would identify brittle tests and fragile tests...
* brittle tests are the tests that developers keep on breaking... they should be executed in the early batches... a brittle test is one that tends to fail after a commit.