I didn't use any open source performance testing tools for the write performance benchmarking, because I wanted to benchmark the performance of the whole system of systems, including our bespoke ETL application (though, as it turns out, that application doesn't appear to be anywhere remotely close to being the bottleneck).
Earlier this year, I did start setting up some read performance benchmarks. For those, I used JMeter, because I'd had to use it in the past and was already somewhat familiar with it. It's not my favorite tool, though, and I've heard good things about Gatling. If you do end up using Gatling, I'd strongly encourage you to start your test design here:
Gatling Docs: Scaling Out. It's very important to run any read tests across multiple client machines, with the server hosted on a separate system; otherwise, it's pretty easy to end up hitting the performance ceiling of whatever single host you're running on. That's what I did with my JMeter tests, and I found that Ansible made that kind of orchestration quite easy. (I never did really finish those read benchmarks, but they're nonetheless on GitHub here:
https://github.com/HHSIDEAlab/fhir-stress-test. The README is wrong, btw; it was copy-pasted from another project.)
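To make the scaling-out approach a bit more concrete, here's a minimal sketch of what a Gatling read-test simulation might look like, using the Gatling 3 Scala DSL. The base URL, resource path, and load numbers are all placeholders rather than anything from the actual project:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration._

class FhirReadSimulation extends Simulation {

  // Placeholder endpoint; point this at the actual FHIR server under test.
  val httpProtocol = http
    .baseUrl("https://fhir.example.com")
    .acceptHeader("application/fhir+json")

  // A single read scenario. Every injector machine runs this same simulation,
  // so the aggregate load is (the rates below) x (the number of injectors).
  val readPatients = scenario("Read Patient resources")
    .exec(
      http("GET Patient")
        .get("/v1/fhir/Patient/123") // placeholder resource path
        .check(status.is(200))
    )

  setUp(
    readPatients.inject(
      rampUsersPerSec(1).to(50).during(2.minutes), // warm up gradually
      constantUsersPerSec(50).during(10.minutes)   // steady-state measurement window
    )
  ).protocols(httpProtocol)
}
```

Per the Scaling Out docs, you'd then run that same simulation from several injector machines against the separately hosted server and consolidate the per-injector results into a single report afterwards. Ansible should work just as well for pushing the simulation out to the injectors and kicking off the runs as it did for my JMeter setup.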
Best regards,