Some of the ideas here relate to the now-old idea of balanced
bootstrapping: see
https://mathweb.ucsd.edu/~ronspubs/90_09_bootstrap.pdf
for example.
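For readers unfamiliar with the idea: in a balanced bootstrap, the resamples are constructed jointly so that every observation appears the same total number of times across all resamples, rather than independently with replacement. A minimal NumPy sketch, assuming the usual construction (repeat the index set, shuffle, and cut into blocks); the function name is mine:

```python
import numpy as np

def balanced_bootstrap_indices(n, b, rng=None):
    """Generate b bootstrap resamples of size n in which every
    observation index appears exactly b times in total across the
    resamples (the 'balanced' property)."""
    rng = np.random.default_rng(rng)
    # Repeat each index b times, shuffle the pool, cut into b blocks of n.
    pool = np.tile(np.arange(n), b)
    rng.shuffle(pool)
    return pool.reshape(b, n)

# Example: 4 resamples of 6 observations; each index occurs 4 times overall.
idx = balanced_bootstrap_indices(6, 4, rng=0)
counts = np.bincount(idx.ravel(), minlength=6)
```

Within any single resample the counts still vary, as in an ordinary bootstrap; only the aggregate counts are forced to balance, which reduces the Monte Carlo variance of bias and mean estimates.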
I have seen early work on cross-validation for model selection in
multiple regression where a typical suggestion was to leave out 20% of
the samples at a time, but that may relate to the context of overall
sample size and to data that did not come from designed experiments.
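The leave-out-20% scheme mentioned above amounts to 5-fold cross-validation: partition the samples into five disjoint folds and, on each pass, fit on four folds and hold one out. A minimal NumPy sketch, assuming a simple random partition (no stratification or blocking); the function name is mine:

```python
import numpy as np

def five_fold_splits(n, rng=None):
    """Yield (train, test) index pairs for 5 disjoint folds, so each
    pass leaves out roughly 20% of the n samples."""
    rng = np.random.default_rng(rng)
    perm = rng.permutation(n)        # randomize before slicing
    folds = np.array_split(perm, 5)  # 5 nearly equal folds
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        yield train, test

# Example: 10 samples -> each held-out fold has 2 samples (20%).
splits = list(five_fold_splits(10, rng=1))
```

With designed-experiment data one would presumably want the folds to respect the design structure (blocks, treatment balance) rather than be purely random, which is exactly the question raised below.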
But the joint questions of "balance" and of "designed experiments" raise
the question of whether the considerations behind partially balanced
factorial designs can be employed, or extended, to provide a scheme for
slicing the data into units to be used in cross-validation or some
other analysis.
The OP says "However, this requires the assumption that the parameter
and predicted value are normal distributions or student distributions."
This may indicate that the plan is to do multiple analyses on small
sections of the data, in contrast to doing multiple analyses on nearly
complete versions of the data where only a small part is left out each
time. The possible benefits of either approach would
depend on what is being attempted. In theory, if all the usual
assumptions apply, the best answers come from a single analysis of the
complete dataset. That one contemplates doing something else suggests
that there are worries about the assumptions: not having a fixed model
in mind, not having Gaussian random errors, or not having independence
between observations.