Main Landing Website
Main Documentation (I usually recommend the first 20 pages and Appendix 4 for new users):
I put together an FMP template, which is the package that handles precipitation arrays.
The header of the template describes the input structure and is followed by the input blocks/descriptions (pretty much a summary of Appendix 6 of the TM 6-A60 report).
There are two versions of the header, one that is a regular text file and another that is HTML so you can view it with syntax highlighting in a web browser.
It only includes a simple use of precipitation but covers all the major features for setting up an OWHM model.
However, you may want to wait until ModelMuse support of OWHMv2 is approved for release.
----------------------------------------------------------------------------------------------------
For running the NWT solver in MODFLOW-OWHM, you have to disable the additional convergence metrics in OWHM to get the same runtime speed. OWHM added something called the Relative Volume Error, which checks
each cell's residual error divided by the volume of that cell. If any cell violates this limit, the time step is not declared converged even though RCLOSE and HCLOSE are satisfied. This was added to improve mass balances in the final solution. The two metrics that
MODFLOW uses for convergence, HCLOSE and RCLOSE, end up in a delicate balance between being too small, resulting in some time steps failing to converge, or too large, resulting in mass errors. We found in practice that the Relative Volume Error ends up improving
the likelihood of time steps converging while lowering the mass balance error. It's not a replacement for HCLOSE/RCLOSE but is just another requirement for a time step to be considered converged.
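To make the logic concrete, here is a rough sketch of that extra test in Python. The function name, array layout, and default tolerances are illustrative assumptions, not OWHM's actual internals:

import numpy as np

def time_step_converged(head_change, residual, cell_volume,
                        hclose=0.01, rclose=100.0, max_rel_vol_error=1e-5):
    # Standard MODFLOW tests: largest head change and largest residual.
    if np.max(np.abs(head_change)) > hclose:
        return False
    if np.max(np.abs(residual)) > rclose:
        return False
    # OWHM's extra test: each cell's residual divided by that cell's
    # volume. A single violating cell blocks convergence even when
    # HCLOSE and RCLOSE are already satisfied.
    rel_vol_error = np.abs(residual) / cell_volume
    return bool(np.max(rel_vol_error) <= max_rel_vol_error)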
Regrettably, the way MODFLOW is designed makes it difficult to calculate the mass balances during each solver iteration and make that a convergence criterion. We tried to add that as an option, but found it was going
to require substantial refactoring of the base code, so it was abandoned in favor of relative volume errors.
There are some other minor differences in how OWHM handles the solvers under the hood that have a minor performance hit but provide the user with more information. For example, it is not compiled with
compiler optimizations as aggressive as traditional MODFLOW's, in order to provide Fortran debug info if there is a fatal error. There are also a lot of additional warnings and error catches built into the code itself to intercept problems and help the user. Another biggie is that it calculates the mass
errors for every time step, regardless of the Output Control (OC). Traditional MF only calculates the mass error if either the OC requests it (I think it's PRINT BUDGET) or if a time step fails to converge. OWHM checks it for every time step and, if it exceeds
5%, raises a warning (the threshold can be changed with a keyword; check out all the BAS options at:
https://code.usgs.gov/modflow/mf-owhm/-/blob/main/doc/Option_Block_Cheatsheets/BAS_Options_Recommended.bas). We added that feature when we realized that time steps would often converge but have a terrible mass error, and the user never knew unless they requested
it in the OC.
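For reference, the mass error being checked is the standard MODFLOW volumetric percent discrepancy. A minimal sketch of the per-time-step check, with made-up budget numbers and warning text purely for illustration:

def percent_discrepancy(total_in, total_out):
    # MODFLOW's volumetric budget error: (IN - OUT) as a percent of
    # the average of IN and OUT.
    return 100.0 * (total_in - total_out) / (0.5 * (total_in + total_out))

error = percent_discrepancy(total_in=1250.0, total_out=1180.0)
if abs(error) > 5.0:  # 5% is the default warning threshold
    print(f"WARNING: time step mass balance error of {error:.2f}%")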
One thing to point out: UPW (required by NWT) does give a slightly different solution compared to LPF and the other flow packages. I talk about that in the following post. The difference becomes exacerbated for large specific storage values, as was demonstrated in the
example here:
https://groups.google.com/g/modflow/c/29BH7rgNnd0/m/HVpNuihfGQAJ
For the parallel question, I'm not sure what you mean by that. Solving groundwater flow is inherently serial in nature. How people solve models in parallel is by running multiple, independent models, as is done in PEST.
We played a bit with parallelizing the reading of the stress period input and solving the budgets at the end of the time step, but unfortunately (at the time we tried it) this resulted in negligible improvements compared to the
loss of an extra thread/CPU for running PEST on.
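If it helps, that kind of parallelism is just launching independent runs side by side. A minimal Python sketch; the executable name mf-owhm, the run directories, and the name file are hypothetical:

import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_model(workdir):
    # Each run sits in its own directory with its own name file.
    return subprocess.run(["mf-owhm", "model.nam"], cwd=workdir).returncode

run_dirs = ["run_01", "run_02", "run_03", "run_04"]
with ThreadPoolExecutor(max_workers=4) as pool:
    exit_codes = list(pool.map(run_model, run_dirs))
print(exit_codes)  # 0 for each run that finished cleanly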
One thing that does help a lot with speed is buffering the input and output files. All files opened in OWHM support the post-keyword BUFFER followed by the buffer size in kilobytes.
For example, in the name file you can have something like:
LIST 55 list.txt BUFFER 1024
which will write the list file to a 1 megabyte buffer and, when it's full, write it out to the file. This has the effect of writing large files like the LIST in 1 MB chunks. For our models, we found we got about a 5% speed bump
by doing this for large files like the cell-by-cell budget and LIST. The downside is that if there is a loss of power to the computer, the buffer is lost (since it never gets written out). We did find that a buffer greater than 1024 does not improve speed and only wastes
RAM. All files by default buffer at 32 KB.
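For intuition, the same idea in plain Python (not OWHM code): hand the file a 1 MB write buffer so output hits the disk in large chunks instead of line by line.

with open("list.txt", "w", buffering=1024 * 1024) as f:
    for step in range(100_000):
        f.write(f"time step {step}: budget ok\n")
# As with OWHM's BUFFER, whatever is still sitting in the buffer is
# lost if the process dies before the file is flushed/closed.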
Input files, on the other hand, are pre-read at the buffer size.
For example,
GHB 56 ghb.txt BUFFER 64
if ghb.txt is 50 KB in size, then the entire file is loaded into RAM at the start and the buffer is read instead of the actual file.
Hope that helps you all out,
Scott