I hope this message finds you well.
I ran the sensitivity analysis module in the SWAT+ Toolbox using the Sobol method with 1,000 iterations. Based on the results, I selected seven parameters for automatic calibration with the DDS algorithm, also configured for 1,000 iterations. However, the simulation terminated unexpectedly before completing the full set of iterations.
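For context on what I understand DDS to be doing: it is a greedy, single-solution search (Tolson and Shoemaker, 2007) that perturbs a shrinking random subset of parameters each iteration. Below is a minimal pure-Python sketch of the algorithm as I understand it, not the Toolbox's actual implementation; the bounds, objective, and perturbation factor r=0.2 are illustrative defaults, not my SWAT+ settings:

```python
import math
import random

def dds(objective, bounds, max_iter=1000, r=0.2, seed=42):
    """Dynamically Dimensioned Search: minimise `objective` over the
    box `bounds` = [(lo, hi), ...] with a greedy one-solution search."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    n = len(bounds)
    # Start from the midpoint of the search space (any feasible start works).
    best = [(l + h) / 2 for l, h in zip(lo, hi)]
    best_f = objective(best)
    for i in range(1, max_iter + 1):
        # Probability of perturbing each dimension shrinks as iterations grow,
        # so the search narrows from global to local refinement.
        p = 1.0 - math.log(i) / math.log(max_iter)
        dims = [d for d in range(n) if rng.random() < p]
        if not dims:
            dims = [rng.randrange(n)]  # always perturb at least one dimension
        cand = best[:]
        for d in dims:
            x = cand[d] + r * (hi[d] - lo[d]) * rng.gauss(0, 1)
            # Reflect candidate values that fall outside the bounds.
            if x < lo[d]:
                x = lo[d] + (lo[d] - x)
                if x > hi[d]:
                    x = lo[d]
            elif x > hi[d]:
                x = hi[d] - (x - hi[d])
                if x < lo[d]:
                    x = hi[d]
            cand[d] = x
        f = objective(cand)
        if f <= best_f:  # greedy: keep the candidate only if it is no worse
            best, best_f = cand, f
    return best, best_f
```

For example, `dds(lambda x: x[0]**2 + x[1]**2, [(-5.0, 10.0), (-5.0, 10.0)])` drives the objective toward zero within the iteration budget.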
My calibration process is based solely on observed outflows from a single river gauging station.
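Concretely, I take "calibration against a single gauge" to mean maximising a goodness-of-fit statistic between simulated and observed flows, such as the Nash-Sutcliffe efficiency (NSE). A minimal sketch, with made-up daily outflow values rather than my actual gauge record:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0
    mean the simulation is a worse predictor than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical daily outflows (m^3/s), for illustration only.
obs = [12.0, 15.5, 30.2, 22.1, 18.4]
sim = [11.0, 16.0, 28.5, 24.0, 17.9]
print(round(nse(obs, sim), 3))  # → 0.959
```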
I would appreciate your guidance on the following two questions:
1. Among the available automatic calibration methods in the Toolbox—DDS, CALSI, and DREAM—which one is generally considered the most robust or effective under conditions similar to mine? Additionally, could you please clarify the recommended parameter settings for each method, particularly when calibration is limited to streamflow data?
2. What are the typical causes of crashes during high iteration counts in the calibration process, and what steps can be taken to prevent such issues in future runs?