setting hddm priors?


Guido Biele

Jul 21, 2012, 12:23:28 PM
to hddm-...@googlegroups.com
hi,

I think I saw earlier in the source code of hddm how priors can be set, but I can't find it anymore. Can anybody give me a hint where to find this info, or just email a quick description?
I'd be happy to write up an easy-to-follow example from that.

cheers-guido

ps: From fitting some models it seems to me that the prior for z has a lower bound of zero. Is this correct?
pps: I think kabuki.analyze has a number of useful functions. A simple link to the source code in one of the later examples would make those easier to find.

Guido Biele

Jul 21, 2012, 2:23:55 PM
to hddm-...@googlegroups.com
Hi,

I did some more digging, which helps me to ask more precise questions:

I remembered z being bounded from the figure at the bottom of this page: http://ski.clps.brown.edu/hddm_docs/intro.html

As far as I can tell, the priors are defined in lines 136-154 of hddm.model.py (http://ski.clps.brown.edu/hddm_docs/intro.html).

Unfortunately I am not a Python expert and would therefore be grateful for any suggestion on how to change these values before starting the MCMC chain.

I could theoretically change the priors in the source code on my machine, but two reasons speak against this:
1) The less important, pragmatic argument is that I run my simulations on a large grid where I do not have write access to the hddm program files.
2) The more important, general argument is that it makes most sense to calculate the Gelman-Rubin convergence criterion with chains that have different starting points, and as far as I can tell the starting values (init) are coded together with the priors. Moreover, there can be applications where different values should be allowed.*

In summary, I have one key question:
Where are the prior distributions and starting points for the chains stored in hddm, and how can I modify them without changing the source code?

cheers - guido

* the best way to set up the experiment I am looking at at the moment is to allow positive and negative biases for different conditions.

Imri Sofer

Jul 21, 2012, 11:42:52 PM
to hddm-...@googlegroups.com
Hi Guido,
We are completely rewriting hddm and kabuki, so my answers will not be relevant for long.
The easiest way to do it now is to create your own class and change the parameters you want to change there.

The following code creates a new class called myHDDM, which inherits all the functions from the original HDDM class. Then you can just redefine get_params according to your desire.
For instance, in this code I've increased the upper limit of a and the initial values of v and z.
You can also write init=np.random.rand() to give the node a random initial value.

import hddm
import numpy as np  # only needed if you use np.random.rand() for a random init
from kabuki import Parameter

class myHDDM(hddm.HDDM):

    def get_params(self):

        params = [Parameter('a', lower=.3, upper=10),
                  Parameter('v', lower=-15., upper=15., init=1.),
                  Parameter('t', lower=.1, upper=.9, init=.1), # Change lower to .2 as in MW09?
                  Parameter('z', lower=.2, upper=0.8, init=.7,
                            default=.5, optional=True),
                  Parameter('V', lower=0., upper=3.5, default=0,
                            optional=True),
                  Parameter('Z', lower=0., upper=1.0, init=.1,
                            default=0, optional=True),
                  Parameter('T', lower=0., upper=0.8, init=.1,
                            default=0, optional=True),
                  Parameter('wfpt', is_bottom_node=True)]

        return params
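The pattern above is plain Python inheritance, so there is no need to touch the installed hddm files at all. A minimal, self-contained sketch of how the override takes effect (toy classes and parameter lists are illustrative; no hddm installation required):

```python
# Toy stand-in for the real class to illustrate the subclassing pattern.
class HDDM:
    def get_params(self):
        return ['a', 'v', 't']            # illustrative defaults

class myHDDM(HDDM):                       # inherits everything from HDDM...
    def get_params(self):
        return ['a', 'v', 't', 'z']       # ...except the method we redefine

model = myHDDM()                          # instantiate the subclass directly
print(model.get_params())                 # the overridden method is used
```

The original class stays untouched; any code that constructs myHDDM instead of HDDM automatically picks up the redefined get_params.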

Again, this format will change in a future version, and models will be defined in a completely different way.

Guido Biele

Jul 22, 2012, 3:03:32 AM
to hddm-...@googlegroups.com
Hi Imri,

thanks a lot, this is very helpful!
One follow-up question due to my limited Python knowledge:
how do I make sure that I define the model in the next step based on the new class myHDDM?

Is it as simple as "overwriting" the old class with:

class HDDM(hddm.HDDM) ...

Or would I have to do this differently?

cheers-guido
--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Thomas Wiecki

Jul 24, 2012, 6:58:16 AM
to hddm-...@googlegroups.com
For some reason this got stuck in the spam filter. Based on your other
email this is probably obsolete?

Thomas

Guido Biele

Jul 24, 2012, 7:06:47 AM
to hddm-...@googlegroups.com
hi,

Yes, this is obsolete. I figured it out, and we will use it to set random
initial values for different chains.
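That idea can be sketched in a few lines: draw each chain's starting value uniformly inside the parameter bounds, so the chains are genuinely dispersed before computing Gelman-Rubin. (The helper name is illustrative; the bounds are copied from Imri's example above.)

```python
import random

# Bounds taken from the get_params example earlier in the thread.
BOUNDS = {'a': (0.3, 10.0), 'v': (-15.0, 15.0), 'z': (0.2, 0.8)}

def random_inits(bounds, seed=None):
    """Draw one random starting value per parameter, inside its bounds."""
    rng = random.Random(seed)
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}

# One dispersed set of starting values per chain.
inits_per_chain = [random_inits(BOUNDS, seed=chain) for chain in range(4)]
```

Each chain then gets its own init values, which is exactly what the Gelman-Rubin diagnostic assumes.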

cheers - guido
--
------------------------------------------------------------------------

Guido Biele
Email: g.p....@psykologi.uio.no
Phone: +47 228 45172
Website <https://sites.google.com/a/neuro-cognition.org/guido/home>

Visiting Address
Psykologisk Institutt
Forskningsveien 3 A
0373 OSLO

Mailing Address
Psykologisk Institutt
Postboks 1094
Blindern 0317 OSLO

sjoe...@gmail.com

May 20, 2016, 9:32:47 AM
to hddm-users
Ach! So how was this done? I suspect that using informed priors and find_starting_values is biasing my gelman_rubin check.

Nathan Tardiff

Feb 19, 2020, 11:11:35 AM
to hddm-users
I'm curious whether there are any further updates on this topic. From what I can tell, current HDDM versions have default starting values, and the default MAP approximation is deterministic (?), so when using Gelman-Rubin you would not be starting your chains from different points. However, I see that in the tutorials Gelman-Rubin is used in conjunction with find_starting_values, so perhaps it isn't a serious issue?
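For context on what the diagnostic measures, the classic univariate Gelman-Rubin statistic is easy to compute by hand from multiple chains; a minimal pure-Python sketch (function name illustrative; real toolkits do this bookkeeping for you):

```python
import random
import statistics

def gelman_rubin(chains):
    """R-hat from m equal-length chains of samples for one parameter."""
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain
    B = n * statistics.variance(means)                            # between-chain
    var_hat = (n - 1) / n * W + B / n   # pooled estimate of posterior variance
    return (var_hat / W) ** 0.5

rng = random.Random(0)
mixed = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(4)]
stuck = [[rng.gauss(mu, 1) for _ in range(500)] for mu in (0.0, 5.0)]
print(gelman_rubin(mixed))  # near 1: chains explore the same distribution
print(gelman_rubin(stuck))  # well above 1: chains never mixed
```

This also shows why dispersed starting points matter: chains launched from one identical, deterministic point can agree early for the wrong reason, making R-hat look fine before the sampler has actually explored the posterior.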