Hi Mark,
The quick answer is NO.
The only way to use multiple computing units - in this case CPUs/Cores - at the moment is the OpenMP parallelisation (i.e. multi-threading). The benefit there is that communication between CPUs/Cores happens directly and easily, as they all have access to the same memory (RAM). Such parallelisation is not as easy when every computing unit has its own memory: it can then only be achieved by exchanging the appropriate information through some kind of message-passing protocol (e.g. MPI). A domain decomposition parallelisation approach is a lot trickier, and it will only be as fast as the slowest part of the network connecting the computing units, over which the messages and data are exchanged. That is really what MPI is meant for - a parallelisation approach that allows you to run massive models on clusters etc. The current MPI implementation in gprMax is just simple task farming of independent models that do not require any communication while they are executing. In essence, we use MPI to create a job array "efficiently".
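To give a feel for what that task farming amounts to, here is a minimal sketch in Python with mpi4py (not the actual gprMax source; the run_model() function and the input file names are made up): every rank simply runs whole, independent models and never needs to talk to the other ranks while a model is executing.

    from mpi4py import MPI

    def run_model(inputfile):
        # Placeholder for running one complete, independent model.
        print(f"Rank {MPI.COMM_WORLD.Get_rank()} running {inputfile}")

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Hypothetical job array of independent input files.
    inputfiles = [f"model_{i}.in" for i in range(20)]

    # Static task farm: each rank picks every size-th model and just runs it.
    # No communication is needed between ranks while the models execute.
    for inputfile in inputfiles[rank::size]:
        run_model(inputfile)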
You can look at the multi-GPU problem in a similar way. You can use many cards in parallel to run independent jobs (i.e. models), as in the task-farming approach currently used in gprMax, but you cannot split a big model across a number of GPUs without the communication of parts of each GPU's memory at every iteration step becoming the bottleneck. In principle, you could build such an algorithm using similar tools (maybe MPI), but it would not be as efficient. This is obviously not available in gprMax.
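To illustrate why splitting one model is so different from task farming, here is a minimal 1D sketch in Python with mpi4py (hypothetical, not gprMax code; the array size, number of time steps and the toy update rule are all made up): each rank owns a slab of the grid and has to exchange its edge ("halo") cells with its neighbours at every single time step before it can update its own cells - that exchange is exactly the per-iteration communication that becomes the bottleneck.

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    nlocal = 100                      # cells owned by this rank (made-up size)
    field = np.zeros(nlocal + 2)      # +2 halo cells, one at each end

    # Neighbouring ranks; MPI.PROC_NULL makes the boundary exchanges no-ops.
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(1000):          # made-up number of time steps
        # Halo exchange: send my edge cells, receive the neighbours' edge cells.
        comm.Sendrecv(sendbuf=field[1:2], dest=left,
                      recvbuf=field[-1:], source=right)
        comm.Sendrecv(sendbuf=field[-2:-1], dest=right,
                      recvbuf=field[0:1], source=left)

        # Toy stencil update standing in for the FDTD field update.
        field[1:-1] = 0.5 * (field[:-2] + field[2:])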
gprMax does not do any of the domain decomposition stuff, but we would like to have a go at a multi-node, big-cluster implementation, which we know works. Actually, an MSc student tried it this year as part of his thesis project, but we kind of ran out of time.
Best
Antonis
PS It is not a trivial coding effort to do a proper MPI domain decomposition parallelisation robustly, including all the extra features gprMax supports (i.e. PMLs, dispersive media, etc.).