Hi Tam,
I have a similar question. Suppose I run 2000 Latin hypercube samples for calibration on 30 cores and then identify the parameter set with the best model accuracy. Say the best set overall is simulation/parameter set 100, which happens to be the 5th simulation run on core 3 (for example). If I then want to generate a new best model, my first thought would be to apply that best parameter set to the original model. Doing this and running the updated model produces outputs that are very nearly the same as the outputs from the calibration run for parameter set 100; the difference seems small enough that it could be attributed to rounding error.
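For what it's worth, this is how I am mapping a global simulation index to a (core, run-within-core) pair. It assumes R-SWAT hands each core a contiguous, equal-sized block of the 2000 parameter sets; that split scheme is my assumption, and `locate_run` is my own helper, not an R-SWAT function:

```r
# Hypothetical helper (not part of R-SWAT): find which core, and which run
# within that core, a global simulation index corresponds to, ASSUMING the
# parameter sets are split into contiguous equal-sized blocks per core.
locate_run <- function(sim_index, n_sims, n_cores) {
  per_core <- ceiling(n_sims / n_cores)          # simulations handled per core
  core     <- ceiling(sim_index / per_core)      # which contiguous block (core)
  run      <- sim_index - (core - 1) * per_core  # position inside that block
  c(core = core, run = run)
}

locate_run(100, 2000, 30)  # under this assumed split: core 2, run 33
```

If R-SWAT instead distributes runs round-robin, the mapping would differ, which is partly why I am asking how the runs are assigned.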
However, from some reading it seems the process might be more iterative when multiple cores are used. For example, does the calibration process copy the original model to each core once, apply the first run's parameter changes to that copy to produce a new model, and then have the next iteration on that core modify the new model, and so on? If I try to update the original model following that kind of iterative procedure, the result really doesn't match the outputs from the original calibration. But maybe I have a coding error.
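To make the two hypotheses concrete, here is a toy sketch in plain R (no SWAT files involved; `apply_params` is a stand-in I made up for whatever parameter update R-SWAT actually performs) contrasting a non-iterative scheme, where each run starts from the original model, with an iterative one, where each run modifies the previous run's model:

```r
base_model <- c(cn2 = 70, esco = 0.95)  # toy "original model" parameters
# relative parameter changes for three runs on one core (illustrative values)
changes <- list(c(0.10, -0.05), c(-0.20, 0.10), c(0.05, 0.02))

apply_params <- function(model, delta) model * (1 + delta)  # toy relative update

# Scheme A (non-iterative): every run starts from the original model
resA <- lapply(changes, function(d) apply_params(base_model, d))

# Scheme B (iterative): every run modifies the previous run's model
resB <- Reduce(apply_params, changes, init = base_model, accumulate = TRUE)[-1]

resA[[3]]  # depends only on the 3rd parameter set
resB[[3]]  # depends on all three parameter sets applied in sequence
```

Under scheme A, the result of run 3 depends only on parameter set 3, which is consistent with my observation that applying the single best set to the original model nearly reproduces the calibration output; under scheme B it would not.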
Sorry if that is confusing to write!
The relevant part of the code, I think, is:
# 16. Set firstRun to TRUE so R-SWAT will delete previous simulations
firstRun <- TRUE
copyUnchangeFiles <- TRUE

# 17. Get content of the file.cio file (about simulation time)
fileCioInfo <- getSimTime(TxtInOutFolder)

# 18. Now start to run SWAT
runSWATpar(workingFolder, TxtInOutFolder, outputExtraction, ncores, SWATexeFile,
           parameterValue, paraSelection, caliParam, copyUnchangeFiles,
           fileCioInfo, dateRangeCali, firstRun, outputFun)