Thank you so much for getting back to me, and for hosting this forum - it's a huge help.
I've iterated over settings and datasets to try to identify where this is coming from and haven't had any luck so far, other than narrowing it down to one particular data set. I'll list the details below in case you're interested. Given that the initial xR2 equals the evaluation xR2 in so many other cases, I'm curious whether you would feel comfortable trusting the initial free-search value, or whether this sounds like an indication of a deeper issue with the model? For what it's worth, the story the models are telling us is consistent with what simple linear regression suggests, so I don't feel they are misleading us by flagging unimportant predictors, etc.
Thanks again,
Nick
Here's a summary of what I've tried so far:
With a new data set, I evaluated single- and multi-factor models, varying the following settings: starting with defaults, then using minimum and maximum neighborhood sizes, with both aggressive and conservative overfitting controls. In all cases, free-search xR2 = evaluation xR2.
I tried all of the above with the data set in question, and in every case the evaluation xR2 drops to a consistent, stable lower number. I used both an untransformed and a transformed response, which shifts the absolute values but does not change the relationship between search and evaluation.
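To illustrate what I mean about the transform: it shifts the absolute R2 values without changing the qualitative picture. A toy numpy sketch, standing in for my actual data with a hypothetical exponential relationship and a log transform:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, size=200)
# Hypothetical response with multiplicative noise.
y = np.exp(0.5 * x + rng.normal(scale=0.3, size=200))

def fit_r2(x, y):
    """In-sample R^2 of a one-predictor least-squares fit."""
    A = np.column_stack([np.ones_like(x), x])   # intercept + predictor
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

r2_raw = fit_r2(x, y)          # response as-is
r2_log = fit_r2(x, np.log(y))  # log-transformed response
# The two R^2 values differ, but both describe the same underlying fit.
```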
Out of curiosity I doubled the data set in question; this too changes the xR2 values but does not change the relationship (the decrease) between free search and evaluation.
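One possible reason the doubling shifts the absolute values: I don't know the internals of the evaluation, but if xR2 is a cross-validated R2, a duplicated row can land in both the training and test folds and inflate the score. A toy 1-nearest-neighbour sketch of that leakage (everything here is hypothetical, not the software's actual method):

```python
import numpy as np

def nn_cv_r2(X, y, k=5, seed=0):
    """k-fold cross-validated R^2 for a 1-nearest-neighbour predictor."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    ss_res = ss_tot = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)   # everything outside this fold
        for i in fold:
            # Predict y[i] from its nearest neighbour in the training fold.
            j = train[np.argmin(((X[train] - X[i]) ** 2).sum(axis=1))]
            ss_res += (y[i] - y[j]) ** 2
            ss_tot += (y[i] - y.mean()) ** 2
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 2))
y = X[:, 0] + rng.normal(scale=0.3, size=60)

r2_single = nn_cv_r2(X, y)
# Duplicating every row puts an exact copy of most test rows into the
# training folds, so those predictions are exact and the score inflates.
r2_doubled = nn_cv_r2(np.vstack([X, X]), np.concatenate([y, y]))
```

If the software is doing something similar, the absolute values would move under duplication while the search-vs-evaluation gap stays put, which matches what I saw.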