Re: Interpreting inconsistent results


StellaSeaDragon

Oct 6, 2015, 8:10:43 PM
to HyperNiche and NPMR
I ran several NPMR analyses with the same response and predictor variables at 3 separate locations, and then once more on the combined dataset. The best models picked out some unexpected predictors that frankly just don't make sense, while passing over others that have strong univariate correlations with the responses and that NMS indicates are "important". Is this common? I know that predictive power doesn't imply ecological or biological importance, but now I'm not sure how to interpret my results.

The best models also chose different predictors for each location and for the combined dataset. I'm starting to lose confidence that this approach can help us streamline our sampling, either by eliminating variables we measure or by identifying features that aren't highly location-specific. Can anyone comment?

Bruce McCune

Oct 6, 2015, 10:29:49 PM
to hyper...@googlegroups.com
This reminds me of Abbey Rosso's PhD thesis, where she measured the same things in the Coast Range and the Cascade Range, in each case with two different methods. The amazing thing was that the "answer" in each of the 4 cases was different -- and some of them were counterintuitive.

Your situation and Abbey's are, I think, a manifestation of a central problem in ecology, that the results of ANY experiment or comparison depend on the context. I would argue that in your case, and in Abbey's, the central result is that "the answer" depends on the temporal and spatial context. I recommend embracing and understanding this result rather than trying to sweep it under the rug! The fact that the results from one location are counterintuitive is probably trying to tell you something.

Another example comes from the experimental-ecology literature on rocky intertidal systems on the west coast of North America. That work tended to reuse the same sites, but lo and behold, when similar experiments were tried at different sites, the results were completely different.

I wonder whether ecologists truly appreciate the depth of this problem; too often we expect generalizations from one area to extend to others. The context-dependency of ecological results can be maddening, but really, it is a characteristic part of our work.

One qualification to this: if your predictors are intercorrelated, then two apparently different models can actually be quite similar, because one variable can reasonably substitute for another. My statements above assume that you are aware of this and that a deeper issue is at stake.
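For instance (this is an illustration, not a HyperNiche feature), a quick way to see whether two apparently different models are near-substitutes is to check the pairwise correlations among your candidate predictors. A minimal sketch in Python, where the file name and the 0.7 cutoff are hypothetical choices:

import pandas as pd

# Hypothetical input: one row per sample unit, one column per candidate predictor
predictors = pd.read_csv("site1_predictors.csv")

corr = predictors.corr()  # pairwise Pearson correlations

cutoff = 0.7  # arbitrary threshold for "could substitute for each other"
cols = list(corr.columns)
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r = corr.loc[a, b]
        if abs(r) > cutoff:
            print(f"{a} and {b} are strongly correlated: r = {r:.2f}")

If the variables selected at the different locations fall into the same correlated cluster, the models may be telling a more consistent story than the variable names suggest.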

Hope this helps.
Bruce McCune


StellaSeaDragon

Oct 7, 2015, 3:33:40 PM
to HyperNiche and NPMR
Thank you for your perspective, Bruce.

After looking at some of my results more closely, I have found exactly what you mentioned: very context-specific relationships. It starts to make sense, even if it means the results are less "useful" for our goals of applying what we learn to streamline sampling and make decisions.

I also wanted to ask what I should make of xR2 values that are very small (<0.1). Does this mean there isn't a good predictor, or suite of predictors, for the response I measured?

Thanks.

Bruce McCune

Oct 7, 2015, 8:33:52 PM
to hyper...@googlegroups.com
Yes, your conclusion on the small xR2 values is correct.
Bruce
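For readers who want to see what xR2 measures: in NPMR it is a cross-validated R2 in which each data point is excluded from its own estimate, so values near zero (or below zero) mean the predictors carry little information about the response beyond its overall mean. A toy sketch of the idea in Python, using a local-mean model with multiplicative Gaussian weights; the function and its inputs are illustrative, not HyperNiche's internals:

import numpy as np

def xr2(X, y, tolerances):
    # X: (n, p) predictor matrix; y: (n,) response;
    # tolerances: (p,) Gaussian smoothing tolerances, one per predictor.
    n = len(y)
    y_hat = np.empty(n)
    for i in range(n):
        d = (X - X[i]) / tolerances                # scaled distances to point i
        w = np.exp(-0.5 * np.sum(d ** 2, axis=1))  # multiplicative Gaussian weights
        w[i] = 0.0                                 # leave point i out of its own estimate
        y_hat[i] = np.sum(w * y) / max(np.sum(w), 1e-12)  # local weighted mean
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot                   # can fall below 0 for poor models

With xR2 below 0.1, the cross-validated estimates do barely better than simply predicting the mean of the response every time.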
