Hi all,
I hope you're doing well. I apologize for the basic question, but I would really appreciate your help.
I ran a number of simulations using my model in R-SWAT. Then, I extracted the best parameter values using the function "Display table of behavioral parameter ranges."
After that, I took those best parameter values and set them as both the min and max for each parameter (essentially fixing the parameters), then ran a single simulation. The results were disappointing: the NSE actually decreased, and the fit was worse than before.
This approach works well in SWAT-CUP when using the "fitted values test" option, so I was wondering:
Why doesn’t this strategy give better results in R-SWAT? Is parameter fixing handled differently, or are behavioral ranges interpreted differently, in R-SWAT compared to SWAT-CUP?
Thanks in advance for any clarification or advice.
Best regards,
In the context of calibrating my watershed with eight streamflow stations, I noticed a key difference in how SWAT-CUP and R-SWAT select the best parameter sets. In SWAT-CUP, at the end of calibration, the software reports a “best simulation”, i.e. the run that maximizes the overall objective function (typically the average or sum of the NSEs across all eight stations). However, this globally best simulation does not guarantee a good NSE at each individual station: the model is optimized for a global compromise, which can hide poor performance at some locations. That is why, when the parameters of this best simulation are extracted and applied in a single run (by setting min = max for each parameter), the resulting NSE values are often lower at some stations; those parameters were never selected to maximize local performance.
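To make that concrete, here is a minimal R sketch with made-up NSE values (three runs, three stations, purely illustrative, not taken from any real R-SWAT output): the run that maximizes the average NSE across stations is not the run that performs best at each individual station.

# Toy example: rows = simulation runs, columns = streamflow stations
nse <- rbind(run1 = c(0.71, 0.40, 0.65),
             run2 = c(0.55, 0.72, 0.30),
             run3 = c(0.60, 0.58, 0.62))

global_obj <- rowMeans(nse)       # overall objective: mean NSE over the stations
which.max(global_obj)             # run3 wins the global compromise
apply(nse, 2, which.max)          # but runs 1, 2 and 1 are best for the individual stations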
R-SWAT, on the other hand, lets you analyze the results in more detail through the exported Excel files. After the objective function is calculated, you can see that each variable (each station) reaches its highest NSE in a different simulation, which means each station has its own optimal parameter set, coming from a different run. Moreover, the "Display Table of Behavioral Parameter Ranges" option only shows the parameters of the globally best simulation, not the best simulation for each station individually. That is precisely why applying this single parameter set across the entire basin can give poor performance for some variables.
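As a rough sketch of how the station-wise best runs can be located from such an export (the file name and the NSE column names below are placeholders, not actual R-SWAT output names; adapt them to what your exported table contains):

library(readxl)                                      # for read_excel()

obj <- read_excel("objective_function.xlsx")         # placeholder: one row per simulation
nse_cols <- grep("^NSE_", names(obj), value = TRUE)  # assumed columns NSE_station1 ... NSE_station8

# For each station, the simulation (row) with the highest NSE
best_run_per_station <- sapply(obj[nse_cols], which.max)
best_run_per_station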
To address this, I applied a different approach. I identified, for each station, the simulation that gave the highest NSE. From those eight simulations, I extracted the parameter values and, for each parameter (e.g., CN2), determined the range across all eight best-performing simulations. For example, CN2 varied between -0.128 and 0.17. I then created new calibration intervals using the minimum and maximum values from this range. With these tailored intervals, I re-ran the calibration process. The result was significantly improved NSE values across all stations, because the new parameter ranges better reflected the local needs of each sub-basin, rather than forcing a single global compromise.
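Continuing the snippet above (it reuses best_run_per_station), the second step looks roughly like this; "parameter_samples.xlsx" is again a placeholder for the R-SWAT export that holds one row of parameter values per simulation, with one numeric column per calibrated parameter:

params <- read_excel("parameter_samples.xlsx")   # placeholder: one row per run, one column per parameter

# Keep only the runs that were best for at least one station
best_params <- params[unique(best_run_per_station), ]

# New calibration interval for each parameter: min/max over those station-wise best runs
new_ranges <- data.frame(min = sapply(best_params, min),
                         max = sapply(best_params, max))
new_ranges    # in my case, e.g. CN2 spanned about -0.128 to 0.17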
What do you think about this approach, Mr. Tam?