SWAT-CUP 95% prediction uncertainty question


Balaji

Mar 10, 2014, 7:49:35 AM
to swat...@googlegroups.com
Dr. Karim,

I am trying to calibrate for total nitrogen loads in my watershed. I selected 15 model parameters, used SUFI2, and ran 1,000 simulations (we have the parallel processing license). I used NSE as the objective function, but I was also interested in keeping the PBIAS low (within 10%). Since I cannot optimize for two objective functions in SWAT-CUP, I used the extracted reach values (from the SUFI2.OUT folder) to calculate the goodness-of-fit statistics in R. It turned out that the simulation with the highest NSE value had a PBIAS much higher than 10%, so as my best simulation I picked the simulation with the highest NSE among those with a PBIAS below 10%.
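Concretely, the selection I did in R looks roughly like the following (a minimal sketch; "obs" and "sims" are hypothetical names for the observed series and the matrix of extracted simulated series, one column per run — the actual layout of the extracted files may differ):

# Goodness-of-fit functions from the extracted reach values.
nse   <- function(obs, sim) 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
pbias <- function(obs, sim) 100 * sum(obs - sim) / sum(obs)

# apply() passes each column positionally, so it binds to 'sim';
# 'obs' is matched by name.
nse_all   <- apply(sims, 2, nse,   obs = obs)   # one NSE per simulation
pbias_all <- apply(sims, 2, pbias, obs = obs)   # one PBIAS per simulation

# Highest NSE among the simulations with |PBIAS| < 10%.
ok   <- abs(pbias_all) < 10
best <- which(ok)[which.max(nse_all[ok])]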

My first question: for the simulation I picked, are the 95PPU values (in 95ppu.txt) still valid? What will the "M95PPU" value be for the new simulation?

For the next run, I set the model parameters to those from the best simulation (the one with good NSE and PBIAS). I made another iteration (this time with only 30 runs) using only the most sensitive parameter and its new range (from new_pars.txt from the previous run). I did the same analysis in R and picked the simulation with the best values for both NSE and PBIAS as my calibrated model.

My second question: should I just use the values from the 95PPU.txt file for the second run as the 95% prediction uncertainty of my calibrated model?

Also, can you tell me how the 95PPU values (L95PPU, M95PPU and U95PPU) are calculated? I read the manual and it says that XL and XU are the 2.5th and 97.5th percentiles of the cumulative distribution of every simulated point, but I am unable to reproduce the 95PPU values generated by SWAT-CUP.

Thank you in advance for your help.

Balaji

Karim Abbaspour

Mar 10, 2014, 8:14:42 AM
to swat...@googlegroups.com
Dear Balaji,
The point I have tried to make several times is that we should not look at the best simulation. In fact, I am seriously thinking about removing the best simulation altogether, but I am afraid many people would be uncomfortable with that. The point is that, because of the uncertainties, our final solution is given by the 95PPU band and the final parameter intervals. What we call the "best" simulation is just one of many acceptable solutions. It may easily change if you take slightly different intervals for the parameters; it is the best only in that iteration, with those parameter intervals, and does not mean anything more than that. So, with that in mind, any solution within the 95PPU is an acceptable solution. If you really need to provide a single set of parameters as the solution (which I strongly advise against), then you can take any simulation in the 95PPU band; otherwise, the range of parameters and the 95PPU band is your solution. So, a short answer to your question is yes, the 95PPU is still representative of the uncertainty no matter which simulation you take.
For the 95PPU, as described in the manual, you calculate the cumulative distribution of all your simulations at every simulated point, then take the values at the 2.5% and 97.5% marks of that distribution. If you calculate it in R, there may be very slight differences due to rounding of the numbers, but the values should be quite similar.
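In R this amounts to per-time-step quantiles, roughly as follows (a minimal sketch; "sims" is a hypothetical matrix with one row per time step and one column per simulation):

# 95PPU band: 2.5% and 97.5% levels of the simulated ensemble at each time step.
l95 <- apply(sims, 1, quantile, probs = 0.025)
u95 <- apply(sims, 1, quantile, probs = 0.975)
# M95PPU is taken here as the ensemble median at each time step; this is an
# assumption of the sketch, not a definition quoted from the manual.
m95 <- apply(sims, 1, median)

Note that quantile() in R supports several interpolation types (see ?quantile); a different interpolation rule is one plausible reason the values do not match SWAT-CUP exactly, in addition to rounding.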
My final comment is: please ignore the best-simulation idea. The final parameter ranges giving you acceptable P_factor and R_factor values are your solution, and the result is expressed as the 95PPU graph.
Keep in mind that the uncertainty in your model prediction is given by the 95PPU band and quantified by P_factor and R_factor. The % error in your model is given by (1 - P_factor), as this is the fraction of the data points not captured, or accounted for, by the 95PPU.
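Continuing the R sketch above (with "obs" the observed series and l95/u95 the band limits; R_factor here follows the SUFI-2 definition of average band width divided by the standard deviation of the observations):

# P_factor: fraction of observations falling inside the 95PPU band.
p_factor <- mean(obs >= l95 & obs <= u95)
# R_factor: average width of the band relative to the spread of the data.
r_factor <- mean(u95 - l95) / sd(obs)
# Share of the data not captured by the band, i.e. the % error above.
pct_error <- (1 - p_factor) * 100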
Hope this is clear.
Best wishes,
Karim
-------------------------------------------------
Dr. K.C. Abbaspour
Eawag, Swiss Federal Institute for Aquatic Science and Technology
Ueberlandstr. 133, P.O. Box 611, 8600 Duebendorf, Switzerland
email: abba...@eawag.ch
phone: +41 44 823 5359
fax: +41 44 823 5375
http://www.eawag.ch/index_EN


mano...@gmail.com

Mar 26, 2014, 11:28:24 PM
to swat...@googlegroups.com
Dear Dr. Abbaspour,

My query is about the minimum number of model parameters that can be used for calibration. I started with 14 parameters (flow component) and 300 runs. After the first iteration I selected 6 model parameters on the basis of ranking and modified the parameter ranges as suggested. After a couple more iterations I achieved good results in terms of the 95PPU band. In the final iteration I chose the single most sensitive model parameter (GW-revap), as suggested by the ranking in the previous iteration, and made an iteration with 50 runs. It gave results equivalent to those with the 6 optimised parameters, for both calibration and validation. I wonder which result to focus on (the result with 6 optimised parameters or with 1 optimised parameter)?

Thanks,
Manoj