Hi! Glad you've made progress...
Yeah, my semi-creative thing was a bit opaque, I didn't have a lot of time.
Basically, here is what I was thinking:
1. The way that the manual dispersal multipliers and distance matrices work is that they take the base dispersal rate (d, or a) (or the weight j, for j events) and, for a given pair of areas, multiply by the value in the multiplier matrix. Then likelihoods etc. are calculated conditional on that.
2. The traditional way to do this (e.g. in Ree & Smith 2008) was to just use user-specified dispersal-probability multipliers. E.g., maybe you would say that
- dispersal between North and South America had a multiplier of 1 (no change from the default)
- dispersal between South America and Africa had a multiplier of 0.1
- dispersal between South America and India had a multiplier of 0.01
3. These were -- and everyone would admit this -- basically made up numbers. But, they did express a subjective sense of the relative probability of dispersal between close and far regions. And, by using a multiplier of 0, you could completely disallow dispersal between regions if desired (e.g. if only allowing dispersal between land-connected regions). So they constitute valid hypotheses/models, at least.
4. In Van Dam & Matzke (2016, J. Biogeog), we proposed using a distance matrix in a similar way:
d_actual = d_base * distance^x
a_actual = a_base * distance^x
j_actual = j_base * distance^x
This is the "+x" model. By default, x=0, which means the distance matrix (if specified at all) has no effect. If you set x to be "free", and thus estimated with the other parameters, then:
x<0 means that dispersal rate decreases as distance increases (which seems likely!)
x>0 would mean that dispersal rate increases as distance increases (which seems less likely; I've never seen a case, but perhaps competition or some such could produce it if, e.g., areas are adjacent and within-continent)
(I recommend people use relative distances rather than literal meters or whatever, just to avoid scaling issues during ML estimation from huge numbers in the multipliers)
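A quick numeric sketch of the +x model (invented numbers, relative distances as recommended above; not BioGeoBEARS code):

```python
# +x model: d_actual = d_base * distance^x.
# Negative x makes dispersal fall off with distance.
d_base = 0.05
x = -1.5  # hypothetical ML estimate; x=0 would switch the distances off

rel_distances = {"SAm-NAm": 1.0, "SAm-Africa": 3.0, "SAm-India": 8.0}
d_actual = {pair: d_base * dist**x for pair, dist in rel_distances.items()}
# A relative distance of 1.0 leaves the rate unchanged;
# larger distances shrink it (for x < 0).
```

Using relative distances keeps the multipliers near 1, which avoids the ML scaling issues mentioned above.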
5. In Dupin et al. (2017, J. Biogeog), we propose a modification to the manual dispersal multipliers, the +w model, where
d_actual = d_base * (manual_dispersal_multiplier)^w
...etc...
By default w=1, which means you want to use the manual dispersal multipliers exactly as the user specifies them. (If no manual dispersal multiplier matrix is specified, then the multipliers are just 1, and 1 to any power equals 1.)
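The effect of w can be seen in a tiny sketch (invented numbers; not BioGeoBEARS code):

```python
# +w model: d_actual = d_base * multiplier^w.
d_base = 0.05
multiplier = 0.1  # a user-specified manual dispersal multiplier

# w=0 ignores the multiplier entirely; w=1 applies it exactly as
# specified; w>1 strengthens its effect.
d_actual = {w: d_base * multiplier**w for w in (0.0, 1.0, 2.0)}
```

So estimating w as a free parameter lets the data say how seriously to take the user's multipliers.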
6. There is also an "environmental distance" matrix option, which works identically to the +x model except that the parameter n is used. So far I have used this only in talks, but people are free to use it. Obviously this second distance doesn't have to literally be "environmental distance"; it can be any distance you think might explain dispersal.
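Since the two distance effects multiply together, combining them looks like this (a hypothetical sketch with invented numbers; not BioGeoBEARS code):

```python
# Combining +x (geographic distance) and +n (second, e.g. environmental,
# distance): d_actual = d_base * geo_dist^x * env_dist^n.
d_base = 0.05
x, n = -1.0, -0.5          # hypothetical estimates for the two exponents
geo_dist, env_dist = 2.0, 4.0  # relative distances for one pair of areas

d_actual = d_base * geo_dist**x * env_dist**n
```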
7. Any of 4, 5, and 6 can be time-stratified.
8. It is fairly trivial to add yet more distance/multiplier matrices, this would take a bit of custom code from me however.
So that's all background...
My "semi-creative" idea was, if you want a different rate for timeslice 2 compared to timeslice 1, all you do is set up a time-stratified multiplier matrix:
=========
A B
1 1
1 1
END
A B
0.1 0.1
0.1 0.1
END
=========
(see the help page at PhyloWiki for the exact file format)
...and then make the parameter "w" free. "w" will have no effect in the top timeslice, since all of the multipliers are 1:
...which obviously means
a_actual = a_base
But in the second time slice, the rate will be
a_actual = a_base * 0.1^w
...and you estimate "w" as another free parameter. Then, to get the inferred rate for timeslice 2, you just plug the ML estimates of "a_base" and "w" into the equation above to calculate a_actual.
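The trick in numbers (a hypothetical sketch with invented ML estimates; not BioGeoBEARS code):

```python
# Timeslice trick: multipliers are all 1 in timeslice 1 and 0.1 in
# timeslice 2, with w estimated as a free parameter.
a_base = 0.05  # hypothetical ML estimate of the base rate
w = 0.8        # hypothetical ML estimate of w

a_ts1 = a_base * 1.0**w  # = a_base: w has no effect when multipliers are 1
a_ts2 = a_base * 0.1**w  # inferred rate for timeslice 2
```

If w had been estimated at 0, the two timeslices would share one rate, which is what makes this a clean nested comparison.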
If you need more timeslices with different rates, or you want to model asymmetric rates, just use more distance matrices; instead of distances, put multipliers of 0.1 in the cells for the rates you want to modify.
I suppose all of this is slightly more onerous than a custom system, but it takes a lot less programming.
Also, all of these model variants create nested pairs (w free vs w=1, w free vs. w=0, etc.) that make it easy to make a table of pairwise likelihood ratio tests, or a table of AICc comparisons to compare all the proposed models at once.
(I recommend reading the books by Burnham and Anderson (2002) or especially Anderson (2008) to get a sense of the philosophy of model choice before creating hundreds/thousands of model variants -- you want to have a reasonably moderate number of scientifically interesting models, not every possible combination of model variants justified by nothing more than uncertainty. Especially worrisome would be having more model variants than species/data points.)
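For point 8's model comparisons, the mechanics are simple once you have log-likelihoods and parameter counts. A sketch with invented numbers (the lnL values and sample size are made up for illustration):

```python
# Comparing a "w fixed" model against a nested "w free" model
# via a likelihood-ratio statistic and AICc.
lnL_fixed, k_fixed = -123.4, 2   # e.g. d and e free, w fixed at 1
lnL_free,  k_free  = -120.1, 3   # d, e, and w all free
n = 50                           # number of tips / data points (invented)

# Models are nested, so 2*(lnL_free - lnL_fixed) is ~chi-square
# with (k_free - k_fixed) degrees of freedom under the null.
LRT = 2 * (lnL_free - lnL_fixed)

def aicc(lnL, k, n):
    # AICc = -2*lnL + 2k, plus the small-sample correction term
    return -2 * lnL + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

# Positive delta favors the "w free" model under AICc.
delta_aicc = aicc(lnL_fixed, k_fixed, n) - aicc(lnL_free, k_free, n)
```

The same pattern extends to a table of pairwise tests or a single AICc table over all candidate models.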
Hope this helps -- if this is mystifying, and/or you need something more that can't be covered by this approach, I'd be open to discussing further over email. Ideas that need new programming from me are more in the "if you want a coauthor" category, which may or may not be needed/desired by you/your lab etc.
Cheers!
Nick