Regularization values vs. Regularization multiplier (beta multiplier)


Nolan Helmstetter

Feb 14, 2019, 11:23:57 AM
to Maxent
Hi all,

Wondering if anyone has any ideas/info/links to literature explaining the results below. I am familiar with what a regularization multiplier does (a.k.a. the betamultiplier in the output; not sure why the name changes), but I am not exactly sure what the regularization values mean. I also understand how to change these values (in the experimental settings). In this case I set the regularization multiplier to 5, but regardless of what I set the multiplier to, these values always stay the same. I was under the assumption that whatever user-defined multiplier you set was applied to all feature classes. This was just an example model I ran to get some results.

Thanks,

Nolan

Maxent.jar results from one k-fold replicate:

The follow settings were used during the run:
7073 presence records used for training, 1769 for testing.
10269 points used to determine the Maxent distribution (background points and presence points).
Environmental layers used (all continuous): clay cnpycvr
Regularization values: linear/quadratic/product: 0.050, categorical: 0.250, threshold: 1.000, hinge: 0.500
Feature types used: hinge linear quadratic
outputdirectory: /Users/nolanhelmstetter/Desktop/Maxent_Test/Test_5
samplesfile: /Users/user/Desktop/Maxent_Test/all_lcts_mxnt.csv
environmentallayers: /Users/user/Desktop/Maxent_Test/ASC_All_Together_Strat
betamultiplier: 5.0
replicates: 5
product: false
autofeature: false
Command line used: 

Nolan Helmstetter

Feb 14, 2019, 12:50:00 PM
to Maxent
Think I figured this one out. Check out Phillips and Dudík (2008).

The regularization multiplier (5 in my example above) is multiplied by the feature-class-specific tuning parameters (the "regularization values" in the output), which are set for each feature class per Table 3 of Phillips and Dudík (2008). The reason I was curious about this is that I used ENMeval to optimize my regularization multiplier, and if I'm not mistaken, ENMeval uses the same multiplier across all feature classes. If I were to replicate my ENMeval model in maxent.jar (which I want to do so I can get MESS analyses and change clamping parameters), I would need to go to the experimental settings and set the multiplier to my optimal value. Anyway, hope that helps anyone else who was confused.
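To make the arithmetic concrete, here is a minimal base-R sketch (my own illustration, not maxent.jar code): the per-feature "Regularization values" printed in the output stay at their sample-size-dependent defaults, and the betamultiplier scales each of them to give the effective penalty.

```r
# Per-feature regularization values as reported in my maxent.jar output
# (these depend on sample size per Table 3 of Phillips and Dudik 2008):
reg_values <- c(linear_quadratic = 0.050, categorical = 0.250,
                threshold = 1.000, hinge = 0.500)

# The betamultiplier scales every one of them. With betamultiplier = 5,
# e.g. the effective hinge penalty becomes 0.5 * 5 = 2.5:
betamultiplier <- 5
reg_values * betamultiplier
```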

Nolan

Angela P. Cuervo-Robayo

Feb 15, 2019, 10:01:29 AM
to max...@googlegroups.com
Thanks!

FYI MESS is already in R. 

Best, 

--
You received this message because you are subscribed to the Google Groups "Maxent" group.
To unsubscribe from this group and stop receiving emails from it, send an email to maxent+un...@googlegroups.com.
To post to this group, send email to max...@googlegroups.com.
Visit this group at https://groups.google.com/group/maxent.
For more options, visit https://groups.google.com/d/optout.


--
Instituto de Biología, UNAM
Tercer circuito ext. s/n Ciudad Universitaria.
México, D.F
CP. 04510

Teléfono: (52) 56 22 91 69
Skype: angela.cuervo

Nolan Helmstetter

Feb 15, 2019, 11:11:55 AM
to Maxent
Thanks, Angela

I'm just starting to dive into the maxnet package. Do you know if it is possible to pass the MESS map, clamping, etc. arguments to ENMeval? I want to continue to use "user-defined" partitions for my occurrence/background data. However, I also want to be able to view my lambdas file, set the number of iterations, and define clamping (e.g., fade by clamping, do not extrapolate). I may be asking for too much!

Nolan


Nolan Helmstetter

Feb 15, 2019, 11:23:12 AM
to Maxent
Sorry, I forgot to state that I want to continue using maxnet (not maxent.jar) because the hinge features differ between the two.

--Nolan

Jamie M. Kass

Feb 17, 2019, 7:13:17 AM
to Maxent
Nolan,

The MESS map is usually created before you run the model, by comparing the environmental values at the occurrence localities (and at the background too, if you'd like) with the environmental rasters you intend to project to. You could run it on a model prediction, I suppose, but I'm not sure what that would tell you: you are mainly interested in how different the training values are from the rasters used for projection. You can run it in R with dismo::mess().

You should be able to pass the clamping settings as additional parameters in the call to ENMevaluate() (via the "..." argument), for example "fadebyclamping=TRUE". Just put these parameters at the end of your ENMevaluate() call.

The lambdas file is viewable from the model object: if your model is called "m", the lambdas info would be in "m@lambdas".
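For intuition about what dismo::mess() computes cell by cell, here is a base-R sketch of the per-variable MESS similarity from Elith, Kearney & Phillips (2010) (my own illustration, not Jamie's code or the dismo implementation): negative values flag conditions outside the training range.

```r
# Per-variable MESS similarity: p is one projection value, ref is the vector
# of training (reference) values for that variable.
mess1 <- function(p, ref) {
  f   <- 100 * mean(ref < p)      # percent of reference values below p
  rng <- max(ref) - min(ref)
  if (f == 0)        100 * (p - min(ref)) / rng   # below the training minimum
  else if (f <= 50)  2 * f
  else if (f < 100)  2 * (100 - f)
  else               100 * (max(ref) - p) / rng   # above the training maximum
}

ref <- 1:10          # hypothetical training values for one layer
mess1(5.5, ref)      # at the median of the training range -> 100
mess1(12, ref)       # beyond the training maximum -> negative (novel)
```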

Jamie Kass
Ph.D. Biology
City University of New York

Nolan Helmstetter

Feb 17, 2019, 2:51:01 PM
to Maxent
Hi Jamie,

Thanks for the info. I figured I'd have to create the MESS map in a different fashion; not a big deal. As for adding parameters at the end of the ENMevaluate() call, I had been trying to create arguments and pass those to it with no luck. Didn't realize it was so easy! Thanks!

I can't get the @lambdas slot to work; I believe it is because I'm using the 'maxnet' algorithm. I get this error:

model@lambdas
Error: no slot of name "lambdas" for this object of class "ENMevaluation"

I was really just curious to see which coefficients were shrunk to 0; I thought that would be interesting to report. However, I did find a way to at least look at the betas (the coefficients that weren't shrunk to 0) after running ENMevaluate() for whatever model you are interested in. Code below:

model <- ENMevaluate(...)
## select the model fit with a regularization multiplier of 0.5 for all feature classes
model_betas <- model@models[[which(model@results$rm == 0.5)]]
betas <- model_betas$betas
View(betas)

I'm assuming those are the final coefficients used in the model after being shrunk by the regularization multiplier.
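Since coefficients shrunk exactly to zero are simply absent from $betas, the retained feature classes can be read off the coefficient names. A base-R sketch (the example names and the name patterns are my assumptions about how maxnet labels features, so treat this as illustrative only):

```r
# Hypothetical named coefficient vector, shaped like a maxnet model's $betas:
betas <- c("clay" = 1.2, "I(clay^2)" = -0.4, "hinge(cnpycvr):0.3:0.9" = 2.1)

# Guess the feature class from the coefficient name (patterns are assumptions):
feature_of <- function(nm) {
  if (grepl("^hinge\\(", nm))      "hinge"
  else if (grepl("\\^2\\)$", nm))  "quadratic"
  else                             "linear"
}
sapply(names(betas), feature_of)
```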

Thanks again,

Nolan

Nolan Helmstetter

Feb 26, 2019, 8:45:18 PM
to Maxent
Hey Jamie,

I ran three different models (see below) with different clamping options, but I am seeing no difference between the three when projected.

ENMevaluate(....clamp=TRUE,"fadebyclamping=FALSE"....)
ENMevaluate(....clamp=TRUE,"fadebyclamping=TRUE"....)
ENMevaluate(...clamp=FALSE,"fadebyclamping=FALSE"...)

When I project them using a logistic transformation, all three have exactly the same values, even in areas with environmental values outside the training data range (I created a MESS map to see where those areas exist).

Any thoughts? For all my models I used the same background and presence locations (user-defined partitions), selected from a buffered area around my occurrence points (to account for sampling bias).

Thanks,

Nolan


Jamie M. Kass

Feb 27, 2019, 5:41:19 PM
to max...@googlegroups.com
Nolan,

First off, yes, I believe you are correct about the non-zero coefficients for the maxnet model.

About the clamping issue: turning off clamping (and likely the other clamping settings) does not work for dismo::maxent() because of an unresolved error in dismo's call to the Java program. This should not be an issue with maxnet, though. Can you confirm your model was created by maxent.jar?

Jamie


Nolan Helmstetter

Feb 27, 2019, 7:15:47 PM
to Maxent

Hey Jamie,

I am actually using the 'maxnet' algorithm because I read about the issue with 'maxent.jar' and the dismo package. I have tried a couple of options:

1) I fit and tuned the model on a smaller extent and then used maxnet.predictRaster() to project onto a larger extent (this was done for efficiency). For this option, I used either maxnet.predictRaster(....clamp = TRUE...) or maxnet.predictRaster(.....clamp=FALSE....). I couldn't get "fadebyclamping" to run in this function.

2) I fit and tuned the model using the entire extent. For these processes, I used
ENMevaluate(....clamp=TRUE....), ENMevaluate(...clamp=TRUE, "fadebyclamping=TRUE"....), and ENMevaluate(....clamp=FALSE....). 

Everything runs fine with no errors, but the output rasters (raw or logistic transformations) all look exactly the same, even in areas with environmental values outside the training range (as determined by a MESS map).

For all of these, I used user-defined partitions with the same presence and background points. I just assumed there would be some difference between the clamped, fade-by-clamping, and extrapolated (no clamp, no fade) rasters.
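In case it helps, one quick numeric check I've been using (a base-R sketch; the vectors stand in for the cell values of each prediction raster, e.g. from raster::values()) to confirm the projections really are identical rather than just looking alike:

```r
# Hypothetical predicted values from two projections of the same model,
# one with clamping on and one with it off:
p_clamped   <- c(0.12, 0.55, 0.91, 0.33)
p_unclamped <- c(0.12, 0.55, 0.91, 0.33)

# Largest absolute per-cell difference; exactly 0 means no cell differs at all:
max(abs(p_clamped - p_unclamped))   # 0
```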

Thanks,

Nolan

Jamie M. Kass

Feb 27, 2019, 7:50:55 PM
to max...@googlegroups.com
Well that is strange. Would you be able to send me the data you're using in your models? Or at least a sample of it? I can do some testing to see if I can figure out what's wrong. 

Jamie

Nolan Helmstetter

Mar 1, 2019, 1:11:41 PM
to Maxent
Hey Jamie,

That shouldn't be a problem. Let me gather everything together and I can get it to you. You can e-mail me at helm...@vandals.uidaho.edu and let me know where/how you'd like me to send it.

Thanks,

Nolan