First of all, you need to clarify what you mean by performance
regression testing. What kind of performance? Functional performance?
System performance? System reliability/stability? Or, put another way,
performance of what: functionality, CPU usage, memory usage, response
times, network bandwidth, system usability?
If it's functionality and usability, then RF might be of use. The
methodology in my mind would be to run the RF tests in a loop X times,
then run a custom report analyzer and check how often the tests pass
(individual tests or the whole suite). The individual tests or the
whole suite then have to meet a certain pass threshold (e.g. some
percentage or number of tests must pass on average over the X runs) to
satisfy the regression pass criteria; a sketch of the loop follows.
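A minimal sketch of the loop, assuming a typical layout (a tests/
directory, results written to results/) and the standard robot command
line; paths and the run count are placeholders to adapt:

    import subprocess

    RUNS = 10  # the "X times" from above; pick your own sample size

    # Run the suite X times, keeping each run's output XML for later
    # analysis. --report/--log NONE skips the per-run HTML artifacts.
    for i in range(RUNS):
        subprocess.run(
            ["robot", "--output", f"results/output_{i}.xml",
             "--report", "NONE", "--log", "NONE", "tests/"],
            check=False,  # a failing test run shouldn't abort the loop
        )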
To analyze the runs, you could have RF output the results as XML, then
run an XML parser to gather the X XML reports and analyze them all.
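One way to do that parsing, sketched with RF's own result API rather
than a hand-rolled XML parser; the attribute names assume Robot
Framework 4+, and the threshold value is just an example:

    from robot.api import ExecutionResult  # RF's own result parser

    RUNS = 10         # must match the number of runs collected above
    THRESHOLD = 0.95  # e.g. "95% of test executions must pass on average"

    passed = failed = 0
    for i in range(RUNS):
        result = ExecutionResult(f"results/output_{i}.xml")
        stats = result.statistics.total  # suite-wide totals (RF 4+ names)
        passed += stats.passed
        failed += stats.failed

    pass_rate = passed / (passed + failed)
    print(f"Pass rate over {RUNS} runs: {pass_rate:.1%}")
    if pass_rate < THRESHOLD:
        raise SystemExit("Regression pass criteria not met")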
If it's system CPU/memory usage and stability, then RF might not be
the tool of choice. Instead, you monitor the system with monitoring
tools during the test run and drive the test with tools like JMeter,
using simple checks such as verifying HTTP 200 OK (for web requests)
against commonly used web pages or web service APIs. You then assess
the pass rate from that, and the CPU/memory performance from the
monitoring tools.
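A rough illustration of the idea in Python, using requests for the
HTTP checks and psutil for CPU/memory sampling (both are my choices
here, not anything JMeter mandates); in practice JMeter would generate
the real load and your monitoring stack would watch the system under
test, which is usually a different machine than the one running the
checks:

    import psutil    # CPU/memory sampling; stand-in for real monitoring
    import requests  # simple HTTP checks; JMeter would do this at load

    URL = "http://localhost:8080/api/health"  # placeholder endpoint
    CHECKS = 100

    ok = 0
    cpu, mem = [], []
    for _ in range(CHECKS):
        try:
            resp = requests.get(URL, timeout=5)
            if resp.status_code == 200:  # the "HTTP 200 OK" check
                ok += 1
        except requests.RequestException:
            pass                         # dropped request counts as a failure
        cpu.append(psutil.cpu_percent(interval=0.1))  # samples *this* machine
        mem.append(psutil.virtual_memory().percent)

    print(f"HTTP 200 pass rate: {ok / CHECKS:.1%}")
    print(f"avg CPU {sum(cpu) / len(cpu):.1f}%, "
          f"avg memory {sum(mem) / len(mem):.1f}%")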