Hi -- it's possible that they changed something in the new version of R -- however, I just downloaded & installed the new R (4.1 for Mac), reinstalled all dependencies from CRAN and then BioGeoBEARS with devtools, and the example script has run without errors.
A Google search turns up our own thread on this question, which is a bit depressing, but further investigation suggests the problem is a line of code in bears_optim_run() that clears old warnings -- useful so that you see only the warnings generated by bears_optim_run() itself. The StackOverflow page on this error (linked in the code comments below) suggests:
"The error message is Error in assign("last.warning"...) occurs on non-vanilla R platforms (i.e. MRO and RRO), because last.warning is locked by default. To unlock the binding, use
unlockBinding("last.warning", baseenv())."
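For reference, that workaround boils down to something like the snippet below -- just a sketch of the StackOverflow advice run in a plain R session, with a guard added in case last.warning does not exist yet; it is not something I ended up using.
===========
# Sketch of the StackOverflow workaround (NOT the fix I adopted):
# unlock the read-only 'last.warning' binding, then clear it with NULL.
# The exists() guard matters because, if 'last.warning' has not been
# created yet, the assign() is what fails on a locked base environment.
if (exists("last.warning", envir = baseenv(), inherits = FALSE)) {
    unlockBinding("last.warning", baseenv())
    assign("last.warning", NULL, envir = baseenv())
}
===========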
However, adding unlockBinding to BioGeoBEARS actually CREATED the error on my system, where it did not exist before. So probably some people's R installations have this binding locked by default in R 4.1, and others don't.
Instead, I have removed the last.warning code and changed it to:
"savedwarning <- warnings()"
...added to the bears_optim_run() function, which works.
New code below and now on GitHub.
Cheers, Nick
PS for the other error message, see further below.
===========
# Wipe out any old/previous warnings()
########################################################
# NOTE 2021-06-20_NJM:
# As of R4.1, it appears this causes an error on
# some R systems
#
# Error in assign("last.warning", NULL, envir = baseenv()) :
#   cannot add binding of 'last.warning' to the base environment
#
# This page:
#
# https://stackoverflow.com/questions/5725106/r-how-to-clear-all-warnings
#
# ...suggests:
#
# The error message is Error in assign("last.warning"...) occurs on non-vanilla R platforms
# (i.e. MRO and RRO), because last.warning is locked by default. To unlock the binding, use
# unlockBinding("last.warning", baseenv()). This implementation is consistent with ?warning,
# which says "If warn is zero (the default), a read-only variable last.warning is created."
# – Jthorpe Mar 9 '16 at 21:39
#
#######################################################
# BUT, THIS ADVICE NO LONGER WORKS IN VERSION 4.1
#unlockBinding("last.warning", baseenv())
# Adding the above CAUSES the error:
# Error in assign("last.warning", NULL, envir = baseenv()) :
# cannot add binding of 'last.warning' to the base environment
# assign("last.warning", NULL, envir = baseenv())
# Replacing with:
savedwarning <- warnings()
# ...which works
===========
This other error is probably unrelated:
==============
"Error in calc_loglike_sp(tip_condlikes_of_data_on_each_state = tip_condlikes_of_data_on_each_state, :
Stopping on error in dense exponentiation downpass (right branch): NaNs produced in likelihood calculation. This may mean your transition matrix disallows necessary transitions. E.g., if your ranges are 'A' and 'B', and your model is DEC, then allowing range 'AB' as a possible state is required, so that you can get from 'A' to 'B' via 'AB' as the intermediate. Alternatively, NaNs can be produced sometimes if your Maximum Likelihood (ML) search proposes weird parameter values (such as a negative rate or weight) or a parameter so small that required transitions have a probability that machine precision rounds to zero or negative. Sometimes this seems to occur because optim, optimx, etc. propose parameters slightly outside the user-specified upper and lower (min/max) boundaries for some reason."
==============
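On the state-space point in that message: under DEC, getting from 'A' to 'B' requires passing through the combined range 'AB', so 'AB' has to be an allowed state. One way to check what states your areas and maximum range size imply is sketched below, using the cladoRcpp helper that the standard example scripts use (adjust the area names and maxareas to your dataset):
==============
# Sketch: list the states implied by the areas and the maximum range size.
# With maxareas = 2, a state containing both area indices (i.e. 'AB')
# should appear in the list; with maxareas = 1 it will not, and DEC has no
# way to get from 'A' to 'B'.
library(cladoRcpp)
areas = c("A", "B")
states_list = rcpp_areas_list_to_states_list(areas = areas,
                                             maxareas = 2,
                                             include_null_range = TRUE)
states_list
==============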
The most common cause of this is having a complex time-stratification / manual dispersal multipliers / areas-allowed setup, which over-constrains the analysis such that your range data are impossible under the model. This can result in, e.g., all zeros in the by-state likelihood calculation, giving a 0/0 that produces NaN and then the error message. So: remove all the constraints, then add them back one at a time to see which one causes the problem, and modify that file accordingly. There is a lot of old help on this on the Google Group, IIRC.
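Concretely, the debugging loop looks roughly like the sketch below. It is patterned on the standard example script; the file names are placeholders, and you should check the field names against whatever your own setup script uses.
===========
# Rough debugging sketch (placeholder file names; field names as in the
# standard BioGeoBEARS example script -- check against your own setup).
library(BioGeoBEARS)

# Step 1: a run with NO time-stratification, dispersal multipliers, or
# areas-allowed constraints. If this runs, one of the constraints is
# making your tip ranges impossible under the model.
BioGeoBEARS_run_object = define_BioGeoBEARS_run()
BioGeoBEARS_run_object$trfn = "tree.newick"   # placeholder: your tree file
BioGeoBEARS_run_object$geogfn = "geog.data"   # placeholder: your geography file
BioGeoBEARS_run_object$max_range_size = 4     # big enough to allow combined ranges
res_unconstrained = bears_optim_run(BioGeoBEARS_run_object)

# Step 2: add the constraints back ONE at a time, re-running after each,
# until the NaN error reappears -- that identifies the file to fix. In the
# example script these are set with fields like the following (commented
# out here; check the exact names in your own setup script):
# BioGeoBEARS_run_object$timesfn = "timeperiods.txt"
# BioGeoBEARS_run_object$dispersal_multipliers_fn = "manual_dispersal_multipliers.txt"
# BioGeoBEARS_run_object$areas_allowed_fn = "areas_allowed.txt"
===========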
Cheers,
Nick