You should first decide whether you are interested in the predictive performance for one new trial or for one new subject. For example, in brain research it is usually more interesting to estimate the model's predictive performance for a new subject, so that it is more justified to say that the model generalizes to people other than just the observed ones.
WAIC is an approximation of leave-one-out cross-validation and works if removing one data point changes the posterior only a little. In this case I would assume that removing a single trial observation would change the posterior only a little and WAIC would work well.
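As a minimal sketch of what this looks like in practice: WAIC can be computed from a matrix of pointwise log-likelihoods (posterior draws by trials). The numbers below are simulated placeholders, not from any real model; only the `waic` function itself reflects the standard definition (lppd minus the effective number of parameters, on the deviance scale).

```python
import numpy as np

# Simulated stand-in for per-trial log-likelihoods evaluated at posterior
# draws: rows = posterior draws, columns = trials (sizes are arbitrary).
rng = np.random.default_rng(0)
S, N = 4000, 120
log_lik = rng.normal(-1.0, 0.1, size=(S, N))

def waic(log_lik):
    """WAIC from an S x N matrix of pointwise log-likelihoods."""
    # lppd: log of the posterior-mean likelihood, summed over trials
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0)
                  - np.log(log_lik.shape[0]))
    # p_waic: sum over trials of the posterior variance of the log-likelihood
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)  # deviance scale

print(waic(log_lik))
```

For a leave-one-subject-out version of WAIC, you would first sum the log-likelihood columns belonging to each subject (per draw) and pass that smaller matrix to the same function.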
If you are interested in leave-one-subject-out, you can consider the likelihood per subject; since this is the total likelihood contribution from all of that subject's trials, it requires as much computation as leave-one-trial-out. The WAIC approximation for leave-one-subject-out can work if removing one subject changes the posterior only a little, but since you are removing more information, a bigger change is also more likely. In that case, you would need to do leave-one-subject-out cross-validation as k-fold cross-validation, where in each fold you leave one subject out.
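The k-fold structure can be sketched as follows. This is an illustration only: it uses simulated data and a deliberately trivial pooled normal model with known unit variance and a conjugate N(0, 10^2) prior on the mean, so that the posterior and posterior predictive are available in closed form; in a real application each fold would involve refitting your actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 8 subjects, 15 trials each (names are illustrative)
n_subj, n_trial = 8, 15
true_means = rng.normal(0.0, 1.0, n_subj)
y = true_means[:, None] + rng.normal(0.0, 1.0, (n_subj, n_trial))

def normal_logpdf(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

# Leave-one-subject-out CV: each fold drops one subject, "refits" the model
# (here a closed-form conjugate update), and scores that subject's trials.
elpd_subjects = []
for s in range(n_subj):
    train = np.delete(y, s, axis=0).ravel()
    prior_v = 100.0
    post_v = 1.0 / (1.0 / prior_v + len(train) / 1.0)   # conjugate update
    post_m = post_v * train.sum()
    # posterior predictive N(post_m, 1 + post_v); summing the log density
    # over the held-out subject's trials gives that subject's contribution
    elpd_subjects.append(normal_logpdf(y[s], post_m, 1.0 + post_v).sum())

elpd_loso = sum(elpd_subjects)
print(elpd_loso)
```

The sum over subjects estimates the expected log predictive density for a new subject, which is the quantity WAIC would be trying to approximate in this setting.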
I don't have a figure showing how WAIC would probably behave in the leave-one-subject-out case, but WAIC has a certain similarity to importance sampling cross-validation, and you may look at Figure 11 in
Aki Vehtari and Jouko Lampinen (2002). Bayesian model assessment and comparison using cross-validation predictive densities. Neural Computation, 14(10):2439-2468.
In that specific case, leaving one data point out worked, but leaving one group of data out did not.
Aki