As a long-term solution, I recommend upgrading at least some of the nodes with high-endurance SSDs to act as node-local scratch storage.
If you are using SLURM as the queuing system, it should not be too difficult to put the upgraded nodes into a separate "highIO" partition, with a different TmpDisk value.
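As a rough sketch of what that could look like in slurm.conf (the node names, feature tag, and TmpDisk size here are made up for illustration; TmpDisk is in megabytes):

```
# slurm.conf -- hypothetical example, adjust names and sizes to your cluster
NodeName=hio[01-08] TmpDisk=900000 Feature=highio State=UNKNOWN
PartitionName=highIO Nodes=hio[01-08] MaxTime=INFINITE State=UP
```

Users (or a job_submit plugin) can then direct I/O-heavy Molpro jobs to the highIO partition, while everything else keeps running on the unmodified nodes.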
If node upgrades are not feasible, then the advice given above is probably the best you are going to get: either convince the users to switch to more approximate methods that require less I/O, or figure out how to hook your queuing system up to give Molpro jobs a ramdisk.
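For the ramdisk route, a common approach is a SLURM prolog that creates a per-job tmpfs scratch directory and a matching epilog that removes it. A minimal sketch, assuming you point Molpro's scratch at the resulting path (the location under /dev/shm and the fallback defaults are my assumptions, and a dedicated tmpfs mount with a size limit would need root, as noted in the comment):

```shell
#!/bin/sh
# SLURM prolog sketch: per-job ramdisk scratch directory.
# slurmd sets SLURM_JOB_ID and SLURM_JOB_USER when the prolog runs;
# the defaults below only exist so the script can be tried by hand.
SCRATCH="/dev/shm/job_${SLURM_JOB_ID:-demo}"
mkdir -p "$SCRATCH"
# On a dedicated mount point you would instead do (requires root and
# lets you cap the size, so one job cannot eat all of the node's RAM):
#   mount -t tmpfs -o size=64G tmpfs "$SCRATCH"
chown "${SLURM_JOB_USER:-$(id -un)}" "$SCRATCH"
echo "$SCRATCH"
```

The epilog would then `rm -rf` (or unmount) the same path so RAM is returned when the job ends. Keep in mind that anything in a ramdisk counts against the node's memory, so the job's memory request has to cover both Molpro itself and its scratch files.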
One more thing to add: it may be possible to mitigate the I/O deluge by (ab)using the Linux page cache's ability to keep written pages in RAM for a long time. This works well if you have lots of free RAM and slow node-local storage, though I have no idea whether GPFS uses the page cache for writes the way normal block devices do. We use this to spare our SSDs from as many writes as possible, but it requires some risky settings in /etc/sysctl.conf: a crash or power loss before writeback means losing everything that was still in RAM.
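For reference, the relevant knobs are the vm.dirty_* sysctls. The values below are purely illustrative of the "hold writes in RAM" approach, not a recommendation; on our systems the exact numbers are tuned to the workload:

```
# /etc/sysctl.conf -- illustrative, risky values (data loss on power failure)
vm.dirty_ratio = 80                    # let dirty pages grow to 80% of RAM before blocking writers
vm.dirty_background_ratio = 50        # start background writeback only above 50% of RAM
vm.dirty_expire_centisecs = 360000    # keep dirty pages in RAM for up to 1 hour
vm.dirty_writeback_centisecs = 360000 # wake the flusher threads only once per hour
```

With settings like these, short-lived scratch files may be created, read back, and deleted before they are ever written to disk at all, which is exactly the behaviour you want for Molpro-style scratch traffic.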