Calibration Versus Validation model efficiency


Negash Wagesho

Apr 23, 2011, 1:53:49 PM
to swat...@googlegroups.com

Dear ArcGIS SWAT and SWATCUP2 users,


I have run into a puzzling issue in the calibration and validation phases of the SWAT model. The calibration step comes out with a suitably high model efficiency, while the validation phase is characterized by significantly low model efficiency values. I have been tracing the root cause but have not been able to figure it out. Is it that the calibration, despite producing sufficiently good results, is not representative of the catchment properties, or is there an inherent problem in the model when generating the validation-phase output? Please kindly clarify this and suggest the appropriate steps I should follow.
Regards,

______________________________________
Negash Wagesho A.
Research Scholar , Department of Hydrology
P.O.Box 247 667 , KIH - 072
IIT, Roorkee

John Joseph

Apr 25, 2011, 6:08:49 AM
to Negash Wagesho, swat...@googlegroups.com

Hi, Negash.

 

Two possible reasons are as follows.

 

1.  Overparameterization.  This happens when you adjust too many parameters for the amount of observed data you have during the calibration period. As an extreme example, suppose you calibrate at the monthly time step, your calibration period is only two years, and you have 12 parameters in autocalibration.  You could probably get a very high model efficiency for the calibration period, because with so many parameters and so few data points, some of the parameters are likely adjusted to account for random effects (instead of actual hydrologic dynamics) during the calibration period.  Then, when you try the validation period, your model efficiency will probably drop dramatically.  I don’t know of any “rule of thumb” to prevent overparameterization.  At the daily time scale, would it be 20 days for each parameter?  To me that doesn’t seem strict enough, especially for arid or semi-arid basins.  How about 5 peaks in the observed discharge for each parameter?  That seems more reasonable to me, but I can’t claim to have tested it.

2.       Change in precipitation.  If the precipitation record for the calibration period is a lot different from that of the validation period, then your model efficiency might also drop.   I think the model developers recommend at least three years of calibration data, but more years include both wet years and dry years

 

There are other possibilities, such as changes in management practices. 
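To make the symptom above concrete: the same efficiency statistic should be computed separately for the calibration and validation periods and then compared. This sketch assumes the "model efficiency" in question is the Nash–Sutcliffe efficiency (NSE), which SWAT studies commonly report; the discharge numbers are made up purely for illustration, not taken from any real basin.

```python
# Sketch: compute Nash-Sutcliffe efficiency (NSE) separately for the
# calibration and validation periods. All discharge values below are
# hypothetical; real ones would come from SWAT output and gauge records.

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_obs

# Hypothetical monthly discharges (m^3/s)
obs_cal = [10.0, 14.0, 22.0, 18.0, 9.0, 6.0]
sim_cal = [11.0, 13.5, 21.0, 18.5, 9.5, 6.5]   # simulation tracks closely
obs_val = [8.0, 30.0, 12.0, 25.0, 7.0, 5.0]
sim_val = [12.0, 18.0, 20.0, 14.0, 11.0, 9.0]  # simulation misses the peaks

print(f"calibration NSE: {nse(obs_cal, sim_cal):.2f}")
print(f"validation  NSE: {nse(obs_val, sim_val):.2f}")
```

A large drop from a high calibration NSE to a much lower validation NSE, as in this toy example, is exactly the pattern an overparameterized (or poorly representative) calibration produces.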

 

John Joseph


John Joseph

Apr 25, 2011, 6:12:36 AM
to John Joseph, Negash Wagesho, swat...@googlegroups.com

Oops.  I meant to say at the end of 2, “I think the model developers recommend at least three years of calibration data, but would recommend more years if needed to include both a wet year and a dry year.”

John Joseph

 


priyantha jayakody

Apr 25, 2011, 10:00:54 AM
to Negash Wagesho, swat...@googlegroups.com
I once had the same problem. I found that my rainfall stations had many missing data points, and some stations had absurd values too. So I suggest you check your rainfall stations, especially during the calibration period.
You also need to put correct information in the management module if you have croplands.
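A quick sanity check of the rainfall records along these lines can be scripted. This is only a sketch: the -99.0 no-data code and the 500 mm/day ceiling are illustrative assumptions, not SWAT conventions, so adjust them to whatever your own input files actually use.

```python
# Sketch: scan daily rainfall series for missing-value markers and absurd
# values before calibration. The -99.0 no-data code and 500 mm/day ceiling
# are assumptions for illustration; match them to your data conventions.

MISSING = -99.0        # assumed no-data code
MAX_MM_PER_DAY = 500.0 # assumed plausibility ceiling for daily rainfall

def check_station(name, daily_mm):
    """Report the percentage of missing readings and any absurd values."""
    n_missing = sum(1 for v in daily_mm if v == MISSING)
    absurd = [v for v in daily_mm if v != MISSING and (v < 0 or v > MAX_MM_PER_DAY)]
    pct_missing = 100.0 * n_missing / len(daily_mm)
    print(f"{name}: {pct_missing:.1f}% missing, {len(absurd)} absurd value(s) {absurd}")
    return pct_missing, absurd

# Hypothetical station records (mm/day)
check_station("station_A", [0.0, 12.5, -99.0, 3.2, -99.0, 0.0])
check_station("station_B", [0.0, 5.0, 999.9, 1.1, 0.0, 0.0])
```

Stations with a high missing percentage or out-of-range values during the calibration window would be the first ones to inspect or replace.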

Hope this helps somewhat.


