Hi Julien,
Apologies for the delay in getting to this, and thanks Kyle for looking into it.
The pyControl withprob function (implemented in utility.py) uses the MicroPython pyb.rng() function (documented here) to generate a random integer, which is then used to generate the random boolean with the specified probability.
As the implementation of the withprob function is straightforward, and the underlying random number generation mechanism is provided by the microcontroller hardware (see here and here), I would be surprised if it were not working as intended.
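For reference, a function of this kind needs only a couple of lines. The sketch below is illustrative only (the actual code is in utility.py) and simply assumes the documented behaviour of pyb.rng(), which returns a 30-bit hardware-generated random integer:
# Illustrative sketch only, not the actual pyControl implementation.
# pyb.rng() returns a 30-bit hardware-generated random integer (0 to 2**30 - 1).
import pyb

def withprob_sketch(p):
    # Return True with probability p by comparing the random integer to p * 2**30.
    return pyb.rng() < int(p * 2**30)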
Nonetheless, to test whether the run lengths of the same outcome (True or False) generated by withprob are as they should be, I generated ~100000 booleans with withprob using the attached pyControl task withprob_test.py, producing the attached data file wpt_data-2021-08-16-172242.txt. The attached file withprob_analysis.py compares the empirical run length distribution to the expected geometric distribution of run lengths. The empirical and predicted distributions are essentially identical, so I'm pretty sure the function is working as intended:
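If you want to reproduce the analysis without the attached files, the snippet below sketches the same run length comparison in NumPy. It is not the attached withprob_analysis.py; the probability p and the use of NumPy's random numbers as a stand-in for the withprob output are assumptions for illustration:
import numpy as np

p = 0.5        # assumed P(True); substitute the probability used in the task
n = 100000     # roughly the number of booleans generated above

outcomes = np.random.rand(n) < p   # stand-in for the booleans produced by withprob

# Split the sequence into runs of identical values.
changes = np.where(np.diff(outcomes.astype(int)) != 0)[0] + 1
run_starts = np.concatenate(([0], changes))
run_lengths = np.diff(np.concatenate((run_starts, [n])))
run_values = outcomes[run_starts]

# Empirical distribution of run lengths for runs of True.
true_runs = run_lengths[run_values]
ks = np.arange(1, true_runs.max() + 1)
empirical = np.array([(true_runs == k).mean() for k in ks])

# Expected geometric distribution: a run of True has length k with
# probability p**(k-1) * (1 - p).
expected = p ** (ks - 1) * (1 - p)

for k, emp, exp in zip(ks[:10], empirical[:10], expected[:10]):
    print(f'Run length {k}: empirical {emp:.4f}, expected {exp:.4f}')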
If you want the run lengths to vary over a more limited range, you could use the pyControl sample_without_replacement class (documented here) to sample outcomes from a short list of rewards and omissions:
# At the top of the task file (the standard pyControl import, which provides
# sample_without_replacement and withprob).
from pyControl.utility import *

# In the main body of the task file where variables are defined.
outcome_sampler = sample_without_replacement([True, True, False, False])

# In the state behaviour function where the current trial's outcome is evaluated.
reward_this_trial = outcome_sampler.next()
The sample_without_replacement class samples without replacement from the list provided when it is instantiated, starting again with the full list once all items have been drawn. In the above example, where the list is [True, True, False, False], the maximum number of True or False outcomes that can be sampled in a row is 4, as in the sequence True, True, False, False, False, False, True, True. The disadvantage of generating outcomes in this way is that previous outcomes become predictive of the next outcome, e.g. if you have just seen the sequence False, False, False, False you know with 100% probability that the next outcome will be True. This is often not desirable if you are interested in, e.g., looking at brain activity associated with prediction errors.
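In case it is useful to see the mechanics, a minimal sketch of a class with this sample-without-replacement behaviour is shown below. It is illustrative only, not the actual pyControl implementation, and uses the standard library random module rather than the microcontroller's RNG:
import random

class SampleWithoutReplacementSketch:
    # Draws items from the provided list without replacement, refilling and
    # reshuffling once every item has been drawn.
    def __init__(self, items):
        self.items = list(items)
        self.remaining = []

    def next(self):
        if not self.remaining:               # all items drawn: start a new cycle
            self.remaining = self.items.copy()
            random.shuffle(self.remaining)
        return self.remaining.pop()

# Usage, mirroring the example above:
# outcome_sampler = SampleWithoutReplacementSketch([True, True, False, False])
# reward_this_trial = outcome_sampler.next()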
best,
Thomas