Impression of withprob() not being so random


Julien Carponcy

Aug 10, 2021, 11:25:43 AM8/10/21
to pyControl
Hi,

I am having an issue with the randomness of binary choices (CS+ for go trials or CS- for no-go trials) generated using:

outcome = withprob(v.reward_prob)  #[full code attached]

After running sessions for over a week, it feels to me that the trials are not as random as I would like them to be.
I almost always get long series of CS+ followed by long series of CS-; alternation seems very poor with prob=0.5, but also with other values such as 0.7. This seems to have consequences for the mice's behaviour during habituation-like training: the first CS- of each series is almost always missed, and then the subsequent consecutive CS- (no-go) trials are handled better.

Is it possible that there is an issue with the seeding of this random function, or with its implementation between the board and the framework?

This is obviously an impression that I can't document right away, because I don't have the tools to analyse my trials yet. However, it is a very strong feeling.
While this is figured out, I will probably switch to another random function to see if it looks better.

Thanks very much for your feedback,

Best, 

Julien


reaching_go_nogo_jc.py

Julien Carponcy

Aug 10, 2021, 11:38:12 AM8/10/21
to pyControl
PS: results of the last session attached, in case someone really wants to parse them to verify what is happening empirically during the task.
277-2021-08-10-154007.txt

Kyle Ireton

Aug 10, 2021, 12:30:44 PM8/10/21
to Julien Carponcy, pyControl
Hi Julien and everyone,

I crunched your output in R (see attached html for code and results).

Considered across the whole session, the distribution was effectively equal (170 CS-, 174 CS+).

However, I can see clusters of greater density over different periods of the session (see the visualization at the end of the document), which could give the impression that it is biased for a while. I can understand that could be confusing for animals if they get locked into a pattern of responses, and I could see the value in being able to limit how many consecutive presentations of the same stimulus they can get, to minimize that effect, depending on what you want. I'm not sure how to program that.

I can also see an argument against limiting streaks generated by random probability, since leaving them unconstrained could reveal an influence of short-term memory. It depends on what you want, of course.

The 'streakiness' of the data is something that could also be quantified, and I could help with that too when I have more time. I suspect that it will balance out by the end of most sessions, but within shorter windows of time it can definitely look biased.

Sincerely,

Kyle Ireton, PhD

Postdoc | Hanks Lab
Center for Neuroscience
University of California, Davis


Julien_random.html

Kyle Ireton

Aug 10, 2021, 12:32:12 PM8/10/21
to Julien Carponcy, pyControl
Not sure the plot of the distribution across time in the session came through in the HTML attachment, so here it is embedded:

image.png

Sincerely,

Kyle Ireton, PhD

Postdoc | Hanks Lab
Center for Neuroscience
University of California, Davis

thoma...@neuro.fchampalimaud.org

Aug 16, 2021, 1:29:49 PM8/16/21
to pyControl
Hi Julien,

Apologies for the delay in getting to this, and thanks Kyle for looking into it.

The pyControl withprob function (implemented in utility.py) uses the MicroPython pyb.rng() function (documented here) to generate a random integer, which is then used to generate a random boolean with the specified probability of being True. As the implementation of the withprob function is straightforward, and the underlying random number generation is provided by the microcontroller hardware (see here and here), I would be surprised if it was not working as intended.
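
As an illustration of the mechanism, a minimal withprob-style function built on pyb.rng() might look like the sketch below. This is not the actual utility.py source; it just assumes, as documented for the pyboard, that pyb.rng() returns a 30-bit hardware-generated random integer:

import pyb  # pyboard hardware random number generator

def withprob_sketch(p):
    # Illustrative sketch only, not the pyControl implementation.
    # pyb.rng() returns a uniform integer in [0, 2**30), so comparing it
    # against p * 2**30 yields True with probability p.
    return pyb.rng() < p * (2 ** 30)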

Nonetheless, to test whether the run lengths of the same outcome (True or False) generated by withprob are as they should be, I generated ~100000 booleans using withprob with the attached pyControl task withprob_test.py, producing the attached data file wpt_data-2021-08-16-172242.txt. The attached file withprob_analysis.py compares the empirical run length distribution to the expected geometric distribution of run lengths. The empirical and predicted distributions are essentially identical (see the attached run length distribution.png), so I'm pretty sure the function is working as intended.
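
For readers without the attachments, the comparison can be reproduced along the following lines. This is a minimal sketch rather than the attached withprob_analysis.py: it assumes the outcomes have already been parsed into a Python list of booleans and that the test was run with prob = 0.5; for other probabilities, True-runs and False-runs follow different geometric distributions and should be compared separately.

import numpy as np
import matplotlib.pyplot as plt

def run_lengths(outcomes):
    # Lengths of consecutive runs of identical outcomes.
    lengths, count = [], 1
    for prev, curr in zip(outcomes[:-1], outcomes[1:]):
        if curr == prev:
            count += 1
        else:
            lengths.append(count)
            count = 1
    lengths.append(count)
    return np.array(lengths)

def compare_to_geometric(outcomes, p=0.5):
    # Compare the empirical run-length distribution to the geometric
    # distribution P(length = k) = p * (1 - p)**(k - 1), which applies
    # to both run types when p = 0.5.
    lengths = run_lengths(outcomes)
    ks = np.arange(1, lengths.max() + 1)
    empirical = np.array([(lengths == k).mean() for k in ks])
    predicted = p * (1 - p) ** (ks - 1)
    plt.plot(ks, empirical, "o-", label="empirical")
    plt.plot(ks, predicted, "x--", label="geometric prediction")
    plt.xlabel("Run length")
    plt.ylabel("Probability")
    plt.legend()
    plt.show()
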
If you want the run lengths to vary over a more limited range, you could use the pyControl sample_without_replacement class (documented here) to sample outcomes from a short list of rewards and omissions:

# In main body of task file where variables are defined.
outcome_sampler = sample_without_replacement([True, True, False, False])

# In the state behaviour function where the current trial's outcome is evaluated.
reward_this_trial = outcome_sampler.next()

The sample_without_replacement class samples without replacement from the list provided when it is instantiated, starting again with the full list once all items have been drawn. In the above example, where the list is [True, True, False, False], the maximum number of True or False outcomes that can be sampled in a row is 4, as in the sequence True, True, False, False, False, False, True, True. The disadvantage of generating outcomes this way is that previous outcomes become predictive of the next outcome: for example, if you have just seen the sequence False, False, False, False, you know with 100% probability that the next outcome will be True. This is often undesirable if you are interested in, e.g., looking at brain activity associated with prediction errors.
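
For readers curious about the mechanism, the logic of such a sampler can be sketched as follows. This is an illustration only, not the pyControl source; a version running on the board would draw its randomness from pyb.rng() rather than CPython's random module.

import random

class SampleWithoutReplacementSketch:
    # Illustrative sketch: draw items from the list without replacement,
    # refilling and reshuffling a fresh copy once all items have been used.
    def __init__(self, items):
        self._items = list(items)
        self._remaining = []

    def next(self):
        if not self._remaining:
            self._remaining = list(self._items)
            random.shuffle(self._remaining)
        return self._remaining.pop()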

best,

Thomas
wpt_data-2021-08-16-172242.txt
withprob_analysis.py
withprob_test.py
run length distribution.png

Julien Carponcy

Aug 23, 2021, 8:29:27 AM8/23/21
to thoma...@neuro.fchampalimaud.org, pyControl
Hi Thomas and Kyle,

No worries at all, I was the one not getting back to Kyle in the first place (and to you on a much earlier e-mail... I bought the Plexon LEDs, so thanks for that too). So please accept my apologies.

Many thanks to you both for providing first insights into the data and confirming that what I "saw" was just a cognitive bias of mine. Sorry for even questioning the implementation :). I'm happy to report that, with more training, long sequences have not been much of a problem for my first mouse, which has mastered the task.

I guess a way to programmatically avoid too many repetitions in a row would be to generate the random sequence beforehand, explicitly setting the maximum number of consecutive identical CS presentations.
I'm not sure I would like to go that way though; provided that the mice can still learn after a while, I will be happy for now with the unbiased random boolean from withprob.
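
For reference, that idea can be sketched in a few lines: pre-generate the whole sequence and flip any outcome that would exceed a chosen maximum run length. This is a hypothetical helper, not part of pyControl, and note that forcing a switch slightly distorts the overall proportion when p != 0.5 and, as Thomas points out, makes the end of a long run predictable.

import random

def generate_capped_sequence(n_trials, p=0.5, max_run=3):
    # Hypothetical helper: pre-generate n_trials booleans with P(True) = p,
    # flipping any outcome that would create more than max_run identical
    # outcomes in a row.
    seq = []
    for _ in range(n_trials):
        outcome = random.random() < p
        if len(seq) >= max_run and all(o == outcome for o in seq[-max_run:]):
            outcome = not outcome  # force a switch to cap the run length
        seq.append(outcome)
    return seq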

Thanks again to both of you for your help in running quick checks on data that I have yet to implement analysis tools for.

Best wishes,

Julien


