> Sorry, I am not sure I understand what you mean by rotating the training and validation split. Could you clarify?
Say you have a dataset [1, 2, 3, 4, 5], which you split into [1, 2, 3] and [4, 5]. If I am not mistaken, PredefinedSplit inside the SFS will then use it as
Training: [1, 2, 3]
Validation: [4, 5]
then rotate it:
Training: [4, 5]
Validation: [1, 2, 3]
and then compute the validation performance as the average over both splits. That's actually not a bad thing to do, but it may not be the intended behavior. In my case, I want to use the simplest possible holdout validation (no cross-validation) -- this is for teaching purposes.
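To make the rotation concrete, here is a minimal sketch with scikit-learn's PredefinedSplit (the 5-sample dataset is just illustrative). Assigning fold labels [0, 0, 0, 1, 1] produces the two rotated splits described above, while marking samples with -1 keeps them in training for every split, which yields the single holdout split I'm after:

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

X = np.arange(5).reshape(-1, 1)  # dataset [1, 2, 3, 4, 5], as indices 0..4

# Two fold labels -> two splits: each fold takes a turn as the validation set
ps = PredefinedSplit(test_fold=[0, 0, 0, 1, 1])
for train_idx, val_idx in ps.split(X):
    print("train:", train_idx, "val:", val_idx)
# train: [3 4] val: [0 1 2]
# train: [0 1 2] val: [3 4]

# A -1 entry means "never in validation" -> a single holdout split
holdout = PredefinedSplit(test_fold=[-1, -1, -1, 0, 0])
for train_idx, val_idx in holdout.split(X):
    print("train:", train_idx, "val:", val_idx)
# train: [0 1 2] val: [3 4]
```

So with the -1 trick, PredefinedSplit itself can express a plain holdout; mlxtend's PredefinedHoldoutSplit is a convenience wrapper for exactly that single-split case.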
Best,
Sebastian
> On Sep 24, 2018, at 10:54 AM, B K <bernade...@gmail.com> wrote:
>
> Hi,
>
> Sorry, I am not sure I understand what you mean by rotating the training and validation split. Could you clarify?
>
> My intention was setting up the cross validation, so that the validation step does not use a subset of the over-sampled data but a sample that's untouched (just to be careful and not tune the model possibly to the over-sampled data).
>
> Is this not what the PredefinedSplit() function can be used for? Should I rather use PredefinedHoldoutSplit()?
>
> Thanks for commenting again and putting more thought into it.
>
> Cheers