spread2 tests breaking when adding code that should be "neutral" (i.e. not effectively used)


ceresv...@hotmail.com

Feb 19, 2018, 7:01:23 PM2/19/18
to SpaDES Users
Hullo SpaDers

So I've been trying to add a persistence probability argument to spread2() (currently only present in spread()).

Adding the code to reproduce fire persistence in a cell for a given probability was easy, and the tests I developed for the new argument show that it's working well. Much like spreadProb, the new argument accepts either a single numeric value or a raster of per-cell persistence probabilities.

My problem arose when I ran the remaining tests. To do so, I set the persistence probability to 0 (the default value), which in this case shouldn't alter the behaviour of the function at all (and it doesn't when I run the function line by line).
After having tried to solve this for two whole days I'm getting desperate. Surely it's something obvious, but I can't spot it.

It's hard to provide a reproducible example, because reproducing the error involves toggling the function code on and off. So I've created two GitHub branches that differ only in the "active" code lines (lines 959 and 960 of spread2.R). See attached R script.

Warning: don't try this @PFC during the morning, as it involves downloading SpaDES.tools twice.

If anyone has the patience/time to go over this, I'd much appreciate your input!

Cheers and happy SpaDESing!
Ceres
spread2_persist_testerror.R

Eliot McIntire

Feb 19, 2018, 7:21:32 PM2/19/18
to SpaDES Users
The answer lies in the random number generation and what the tests are doing.

1. Random number generation: if you do the following:

set.seed(10)
a1 <- rnorm(10)
b <- sample(1)   # this consumes one random draw
a2 <- rnorm(10)

# Second sequence, with no sample() call in between
set.seed(10)
b1 <- rnorm(10)
b2 <- rnorm(10)

# what is this, TRUE or FALSE?
all.equal(a2, b2)

-- it's FALSE, because the call to sample() consumed a single random draw, so the two sequences are out of sync: a1 is identical to b1, but a2 is not b2.

2. The tests: the only ones you showed were very much dependent on the random number sequence. Specifically, numRetries is a column that indicates how many times an active pixel "tries" to spread to another pixel but fails, e.g. because no neighbouring cell is available to spread to. This happens randomly, so the exact sequence of random numbers matters for numRetries.

Setting persistProb introduces random draws that didn't exist before, so any test that depends on the exact random number sequence will fail.
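The stream-desynchronization effect isn't specific to R; here is a minimal Python sketch of the same idea (the function and variable names are hypothetical, just mirroring the R example above):

```python
import random

def simulate(extra_draw):
    """Draw 10 numbers, optionally consuming one extra draw mid-stream."""
    random.seed(10)
    first = [random.random() for _ in range(10)]
    if extra_draw:
        random.random()  # one extra draw, like an unguarded persistProb check
    second = [random.random() for _ in range(10)]
    return first, second

a1, a2 = simulate(extra_draw=True)
b1, b2 = simulate(extra_draw=False)

print(a1 == b1)  # True: the streams are identical up to the extra draw
print(a2 == b2)  # False: the single extra draw shifted everything after it
```

Any test that asserts exact values drawn after that point will fail, even though the "real" behaviour of the function hasn't changed.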

In this case, assuming that there is no breakage in the behaviour, the solution is to change the test criteria so that it passes, given the new code pieces.

Another solution is to avoid touching the random number generator when the feature is off:

if (persistProb > 0) {
  # the persistence-related draws go here
}

That way, when persistProb is 0 (the default), no extra draws are made and the random number sequence is unchanged. Remember: if you add even a single random number generation to a sequence, everything drawn after it changes.
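To make the guard idea concrete, here is a toy Python sketch (baseline and with_persistence are hypothetical stand-ins for the old and new spread2 internals, not actual SpaDES.tools code):

```python
import random

def baseline():
    """The 'old' code: two batches of draws, no persistence feature."""
    random.seed(42)
    a = [random.random() for _ in range(5)]
    b = [random.random() for _ in range(5)]
    return a, b

def with_persistence(persist_prob):
    """The 'new' code: the persistence draw is guarded by persist_prob > 0."""
    random.seed(42)
    a = [random.random() for _ in range(5)]
    if persist_prob > 0:
        random.random()  # persistence draw happens only when feature is on
    b = [random.random() for _ in range(5)]
    return a, b

print(baseline() == with_persistence(0))    # True: guard skips the draw
print(baseline() == with_persistence(0.5))  # False: extra draw shifts stream
```

With the guard in place, running the new code with the default persist_prob of 0 reproduces the old random number sequence exactly, so the existing tests pass untouched.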

ceresv...@hotmail.com

Feb 19, 2018, 8:05:44 PM2/19/18
to SpaDES Users
Riiiiiight. I think I understand. 

I still don't fully get why calling sample() between a1 and a2 changes the result of a2 relative to b2, but for now it's enough to know that it does.

I'm not sure I understand these tests well enough to change their criteria while ensuring that they're still testing the right thing. So I might just use the workaround you suggested.

Thanks @Eliot :)