Streamflow is below 1 m3/s

Martin Nguyen

Aug 20, 2025, 9:03:46 PM
to wrf-hydro_users
Hi team,

I am trying to simulate the streamflow for an area in New Zealand. Could you please help me with the issue described below?

The streamflow (from CHANOBS) is far lower (below 1 m³/s) than I expected (15–20 m³/s). I checked here, where a similar question was asked, so I verified the basin: it is about 295 km². I also checked FLOWACC and CHANNELGRID, and as far as I can see they match reality. The DEM resolution is 4 m and the CHANNELGRID resolution is 20 m. I generated the LDASIN_DOMAIN1 data from ERA5 as instructed here, and the regridded RAINRATE looks fine to me.
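
For reference, this is roughly the kind of check I did on the regridded forcing (a minimal sketch, assuming the usual mm/s units for RAINRATE; the filename is the example file attached below):

```python
# Quick sanity check of the regridded rainfall in one forcing file.
import xarray as xr

ds = xr.open_dataset("1999011508.LDASIN_DOMAIN1")
peak_mm_hr = float(ds["RAINRATE"].max()) * 3600.0  # mm/s -> mm/hr
print(f"max RAINRATE in this file: {peak_mm_hr:.2f} mm/hr")
```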

I have attached here:
- a plot of rainrate and streamflow (hourly).
- namelist.hrldas, hydro.namelist, and namelist.wps.
- routing_stack_examine.zip: Results from Examine_Outputs_of_GIS_Preprocessor.py so you can check the Routing stack.
- an LDASIN_DOMAIN1 file as an example of the FORCING data

Other data, such as the full FORCING (the complete set of LDASIN_DOMAIN1 files), the code I wrote to generate it, and the observed streamflow data, are either too big or confidential. I can provide them later if necessary.

Please let me know if you need further information. Thank you so much for your time and consideration.

Kind regards

Martin Nguyen

hydro.namelist
Rainrate_Streamflow.jpg
routing_stack_examine.zip
namelist.wps
namelist.hrldas
1999011508.LDASIN_DOMAIN1
other_necessary_inputs.zip

Arezoo RafieeiNasab

Aug 22, 2025, 11:58:17 AM
to wrf-hyd...@ucar.edu
Hi Martin, 

I checked your namelists, and it seems you are running cold-start simulations for only 2 months. I would not trust such simulations; you want to make sure you do a proper spinup (model warm-up) before you start verifying the model. A brief explanation can be found here: https://wrf-hydro.readthedocs.io/en/latest/appendices.html#a15-restart-files-overview

Also, to make sure this is not caused by the forcing dataset, you could change FORC_TYP to 4 (https://wrf-hydro.readthedocs.io/en/latest/model-inputs-preproc.html#specification-of-meteorological-forcing-data). This applies an idealized 1 in/hr rainfall and is a useful initial test to rule out the forcing dataset.
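
If you script your experiments, a minimal sketch of making that switch (assuming the f90nml package and that FORC_TYP sits in the &HYDRO_nlist group of hydro.namelist; editing the file by hand works just as well):

```python
# Hypothetical sketch: switch to the idealized-forcing test without losing
# the comments in hydro.namelist (f90nml.patch preserves the original text).
import f90nml  # pip install f90nml

f90nml.patch("hydro.namelist",                  # original file
             {"hydro_nlist": {"forc_typ": 4}},  # 4 = idealized rainfall pulse
             "hydro.namelist.forc4")            # patched copy
```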

FYI, the Fulldom_hires.nc file was missing from your datasets, so I could not run the case on my end to confirm and verify.

Thanks!
Arezoo



--
Arezoo Rafieei Nasab, Ph.D.
NCAR/RAL Project Scientist II

Martin Nguyen

Aug 25, 2025, 8:08:58 AM
to wrf-hydro_users, Arezoo RafieeiNasab
Hi Arezoo,

Thank you so much for your reply. I really appreciate it!

I have tried a longer period (1998/09/01 to 1999/03/01) with FORC_TYP = 4; unfortunately, the streamflow (after spinup) is still below 1 m³/s (so I guess the cause may not be the FORCING dataset?). I have put all the files in vers_003_streamflow. Please let me know if you cannot access it (I have already sent you an invitation to the folder). The link includes:

- domain_inputs: geo_em.d01.nc, hydro2dtbl.nc, wrfinput_d01.nc, soil_properties.nc, and namelist.wps
- routing_stack: whirinaki_test_001 (folder, .zip, and .log) and outputs_001 (from Examine_Outputs_of_GIS_Preprocessor.py)
- simulation: the *.TBL files, namelist.hrldas, hydro.namelist, and the DOMAIN and FORCING folders. The FORCING dataset covers roughly 6 months (1998/09/01 to 1999/03/01) and is still uploading (in case you do not see all of it yet).
- observed streamflow

Apart from that, my FORCING dataset is generated from the ERA5 reanalysis as described here. I have not uploaded my simulated results because they are too big.

I would appreciate any advice, and please let me know if you need further information.

Kind regards,

Martin Nguyen

Martin Nguyen

Aug 29, 2025, 2:09:38 AM
to wrf-hydro_users, Arezoo RafieeiNasab

Hi Arezoo,

I'm just following up on my earlier email regarding the low streamflow results. I have provided two additional plots: RAINRATE (regridded from ERA5) and streamflow (from the WRF-Hydro simulation) over the longer forcing period (1998/09/01–1999/03/01), with FORC_TYP = 1. I have also added a file to vers_003_streamflow detailing how I converted ERA5 to the WRF-Hydro forcing dataset, following the instructions here. Please let me know if you cannot access it. In case you cannot and still need some basic information, I have also attached the data to this email. The file "necessary_data.zip" includes:

- namelist.hrldas, hydro.namelist, and namelist.wps.
- routing_stack_examine.zip: Results from Examine_Outputs_of_GIS_Preprocessor.py so you can check the Routing stack.
- a LDASIN_DOMAIN1 input as an example of FORCING data
- Other necessary inputs: geo_em.d01.nc, hydro2dtbl.nc, soil_properties.nc, and wrfinput_d01.nc

Please let me know if you need further information.

Kind regards

Martin Nguyen
regridded_rainfall_from_ERA5.PNG
wrfhydro_streamflow.PNG
necessary_data.zip

Arezoo RafieeiNasab

Aug 29, 2025, 11:26:53 AM
to Martin Nguyen, wrf-hydro_users
Hi Martin, 

I got your files; I just have not gotten around to running any simulations yet. I will try to do it this weekend and get back to you with some comments.

Thanks!
Arezoo

Martin Nguyen

Aug 29, 2025, 11:59:33 PM
to wrf-hydro_users, Arezoo RafieeiNasab

Hi Arezoo,

Thank you so much for letting me know. I really appreciate your time and help with this. I look forward to your comments.

Kind regards

Martin Nguyen

Arezoo RafieeiNasab

Sep 1, 2025, 2:25:37 AM
to Martin Nguyen, wrf-hydro_users
Hi Martin, 

First, thanks for being patient! I finally got a chance to look at your domain and redo the simulation. I did not change anything in your namelists or domain files; I just ran it. When I look at the CHANOBS files, I see reasonable values:

                        filename        gage1      gage2
1  199809010000.CHANOBS_DOMAIN1 0.0000000000  0.0000000
2  199809010100.CHANOBS_DOMAIN1 4.5275559425  0.2070884
3  199809010200.CHANOBS_DOMAIN1 3.2711133957  0.2319880
4  199809010300.CHANOBS_DOMAIN1 0.5948514938  0.2128190
5  199809010400.CHANOBS_DOMAIN1 0.1581338048  0.2111183
6  199809010500.CHANOBS_DOMAIN1 0.0579994358  4.0540438
7  199809010600.CHANOBS_DOMAIN1 0.0262441989 34.6481094
8  199809010700.CHANOBS_DOMAIN1 0.0137332408 36.1834831
9  199809010800.CHANOBS_DOMAIN1 0.0079984162 37.7559052
10 199809010900.CHANOBS_DOMAIN1 0.0050479714 49.1916389
11 199809011000.CHANOBS_DOMAIN1 0.0033928014 45.7228470
12 199809011100.CHANOBS_DOMAIN1 0.0023942934 49.0258675
13 199809011200.CHANOBS_DOMAIN1 0.0017573087 45.9198265
14 199809011300.CHANOBS_DOMAIN1 0.0013314807 37.4373512
15 199809011400.CHANOBS_DOMAIN1 0.0010353369 29.0855026
16 199809011500.CHANOBS_DOMAIN1 0.0008222427 22.5244408
17 199809011600.CHANOBS_DOMAIN1 0.0006645433 17.5766487
18 199809011700.CHANOBS_DOMAIN1 0.0005442509 13.8718758
19 199809011800.CHANOBS_DOMAIN1 0.0004498037 11.0828457
20 199809011900.CHANOBS_DOMAIN1 0.0003738912  8.9588127

At one of the locations, the streamflow goes all the way up to 45 cms. I also quickly checked the CHRTOUT files for the full domain, and streamflow goes up to 700 cms at one location within the domain. However, you mentioned the streamflow values stay below 1 cms, which is not consistent. Could you confirm whether you get similar values with FORC_TYP = 4, which was the option set in your simulation folder?
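
In case it is useful, one way to reproduce this kind of table (a minimal sketch, assuming Python with xarray and that your CHANOBS files carry a `streamflow` variable with one value per gage point):

```python
# Tabulate streamflow (m3/s) at the CHANOBS gage points, one row per file.
import glob
import xarray as xr

for path in sorted(glob.glob("*.CHANOBS_DOMAIN1")):
    with xr.open_dataset(path) as ds:
        flows = ds["streamflow"].values.ravel()  # one value per gage point
        print(path, " ".join(f"{q:12.7f}" for q in flows))
```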

Thanks!
Arezoo

Martin Nguyen

Sep 1, 2025, 9:48:33 AM
to Arezoo RafieeiNasab, wrf-hydro_users
Hi Arezoo,

Thank you so much for your response. I really appreciate it!

I'm not sure if I understand correctly, but as I read it, the data you showed fall within the spin-up time, whereas what I described is after the spin-up time (after the streamflow peak at the beginning of the period), focusing on gage 1 (I don't have observed data for gage 2 to compare). After the spin-up time, the streamflow values stay below 1 cms, except for one event of around 17–18 cms (in December) when the rainfall was extremely high. According to the observed data I have, I would expect streamflow of around 2–17 cms in January, but the simulation remains below 1 cms.

I have visualised this in the attached PDF (for FORC_TYP = 1), showing the spin-up period, the low streamflow values, and the period where I expect the simulation to match the observed data. The simulation shows the same patterns, but the magnitude stays below 1 cms. Please zoom in to see the values more clearly. I have also visualised gage 2.

With FORC_TYP = 4 I see the same behaviour: after the spin-up time, the streamflow at gage 1 stays below 1 cms. I have visualised this in the attached PDF as well (though I only ran that case to around October).

Please let me know if you need further information.

Kind regards,

Martin Nguyen

streamflow_below1cms_explanation.pdf

Arezoo RafieeiNasab

Sep 2, 2025, 1:06:23 AM
to Martin Nguyen, wrf-hydro_users
Hi Martin, 

With FORC_TYP = 4, there is just one pulse of rainfall at the start of the simulation, and you need to look at the results in the few hours after that. This is an idealized setup to make sure the model works, which in your case I think it does: rainfall falls in the first hour, and then you can watch the water being routed through the rivers over the course of a few hours. Since there is no rainfall after the first hour, the streamflow recedes back to baseflow after a few hours, which in your case is a small value. In this configuration your FORCING folder is not used.

To clarify, spinup refers to a long-term model simulation run to reach an equilibrium in the quantity of interest; I usually run the model for multiple years (the more the better) to reach that state. What you are highlighting as spinup is too short to call spinup; it is more of an event. Correct me if I am wrong and you do have a long-term spinup.

Another point: if you would like to see the impact of the initial condition, you could change the soil moisture content at the beginning of your simulation and observe the effect. If you want to try it, double the variable SMOIS, which is the soil moisture content at the initial time:

ncap2 -s "SMOIS=SMOIS*2" wrfinput_d01.nc wrfinput_d01_new.nc
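
Or, if NCO is not installed, an equivalent sketch in Python/xarray (the doubled values should still stay physically plausible, i.e., below the porosity):

```python
# Hypothetical xarray equivalent of the ncap2 command above.
import xarray as xr

ds = xr.open_dataset("wrfinput_d01.nc")
ds["SMOIS"] = ds["SMOIS"] * 2  # initial volumetric soil moisture; keep the
                               # doubled values below porosity (SMCMAX)
ds.to_netcdf("wrfinput_d01_new.nc")
```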

I hope this is helpful. Thanks!
Arezoo

Martin Nguyen

Sep 3, 2025, 1:03:40 AM
to Arezoo RafieeiNasab, wrf-hydro_users
Hi Arezoo,

Thank you so much for your clear explanation and helpful suggestions. I really appreciate it.

I haven't run a long-term spinup (longer than 6 months) yet, so I'll set up a ~10-year run to see how it goes, and I'll also try adjusting the SMOIS variable as you suggested. I also came across the calibration process in your slides and will explore the PyWrfHydroCalib package.

If I still run into issues with low streamflow, I'll follow up here - otherwise, your guidance has already helped me resolve the main problem.

Thanks again for your time and support.

Kind regards

Martin Nguyen

Arezoo RafieeiNasab

Sep 3, 2025, 1:59:45 AM
to Martin Nguyen, wrf-hydro_users
Hi Martin, 
 
I am not suggesting that the low flow values will disappear once you include spinup; rather, you need some spinup before you start evaluating the model. And yes, if there is no problem with the forcing files and they are of good quality, and you would like to go ahead with the model configuration of your choice (for example, gridded routing vs. reach-based routing), the next step would be to calibrate the model to tune the parameters. We usually calibrate after performing a baseline simulation and making sure all the other pieces are working (forcing, model code, etc.). We have documentation for model calibration, so hopefully you can follow that when the time comes.

Good luck!
Arezoo

Martin Nguyen

Sep 4, 2025, 1:05:19 AM
to Arezoo RafieeiNasab, wrf-hydro_users
Hi Arezoo,

Thank you again for your clarification earlier. I am currently running the model and waiting for results, but I want to better understand how to check the rainfall partitioning in the outputs.

Essentially, I would like to trace where the rainfall goes once it enters the model — whether it becomes streamflow, groundwater, soil moisture, or other components like ET. For a simple sanity check, I was thinking of comparing:

Rainfall ≈ Streamflow + Groundwater + Soil Moisture (+ smaller terms like ET).

  • Does this sound like a reasonable first check?

  • Which specific WRF-Hydro output variables/files should I use to track streamflow, groundwater, and soil moisture? For example, I can see groundwater inflow in GWOUT_DOMAIN, but I cannot find SOIL_M in LDASOUT_DOMAIN1. Where should I look for soil moisture in the outputs?

On calibration, I’ve found the slides and codes that I mentioned earlier. When you mentioned “documentation,” did you mean additional materials beyond those?

Kind regards

Martin Nguyen

aubrey

Sep 5, 2025, 6:41:59 PM
to wrf-hydro_users, Martin Nguyen, Arezoo RafieeiNasab
Hi Martin:
The exact water budget variables depend on your configuration. In general terms, I use:
Precip - ET - change in Soil Moisture - change in SWE - change in Canopy Store - change in Surface Head - change in GW Storage - streamflow = 0
(this assumes change in storage in the channel is small... you can also use the channel inflow components to avoid this)

Precip can be calculated from the LDASIN files
ET, change in soil moisture, SWE, and canopy store can all be calculated using LDASOUT files
change in surface head can be calculated from sfcheadsubrt in RTOUT files if you have overland flow activated
change in GW store can be calculated using depth in GWOUT files if you are using one of the conceptual baseflow models
streamflow can be pulled from the CHRTOUT files

I generally convert all to depths in mm before calculating water budget closure.
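
A rough sketch of that bookkeeping for a few of the terms (Python with xarray/dask; variable names such as ACCET and SOIL_M depend on your io_config_outputs choice, so check `ncdump -h` on your files first, and the basin area, layer depths, and outlet index below are placeholders):

```python
# Water-budget terms as domain-mean depths in mm (subset of the list above;
# SWE, canopy store, sfcheadsubrt (RTOUT), and GW depth (GWOUT) follow the
# same pattern).
import glob
import xarray as xr

DT = 3600.0                     # output interval in seconds (hourly files)
BASIN_AREA_M2 = 295.0e6         # ~295 km2, from earlier in this thread
SOIL_LAYER_MM = [100.0, 300.0, 600.0, 1000.0]  # default Noah-MP layer depths

# Precipitation: RAINRATE (mm/s) from the LDASIN forcing files.
forc = xr.open_mfdataset(sorted(glob.glob("FORCING/*.LDASIN_DOMAIN1")),
                         combine="nested", concat_dim="Time")
precip_mm = float((forc["RAINRATE"] * DT).sum("Time").mean())

# ET: accumulated ACCET (mm) from LDASOUT, last minus first time step.
lsm = xr.open_mfdataset(sorted(glob.glob("*.LDASOUT_DOMAIN1")),
                        combine="nested", concat_dim="Time")
et_mm = float((lsm["ACCET"].isel(Time=-1) - lsm["ACCET"].isel(Time=0)).mean())

# Soil-moisture change: volumetric SOIL_M times layer thickness, summed.
dsm = lsm["SOIL_M"].isel(Time=-1) - lsm["SOIL_M"].isel(Time=0)
dsm_mm = sum(float(dsm.isel(soil_layers_stag=k).mean()) * SOIL_LAYER_MM[k]
             for k in range(len(SOIL_LAYER_MM)))

# Streamflow: outlet discharge (m3/s) from CHRTOUT, converted to basin depth.
chrt = xr.open_mfdataset(sorted(glob.glob("*.CHRTOUT_DOMAIN1")),
                         combine="nested", concat_dim="time")
q_out = chrt["streamflow"].isel(feature_id=0)  # index of your outlet reach
runoff_mm = float(q_out.sum("time")) * DT / BASIN_AREA_M2 * 1000.0

residual = precip_mm - et_mm - dsm_mm - runoff_mm
print(f"P={precip_mm:.1f} ET={et_mm:.1f} dSM={dsm_mm:.1f} "
      f"Q={runoff_mm:.1f} residual={residual:.1f} mm")
```

If the residual comes out as a large fraction of P, the terms omitted in this sketch (SWE, canopy store, surface head, groundwater storage) are the first places to look.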

Hope that helps.

Thanks!
Aubrey

Martin Nguyen

Sep 10, 2025, 8:26:43 AM
to aubrey, wrf-hydro_users, Arezoo RafieeiNasab
Hi Aubrey,

Thank you so much for your help. I am working on extracting the variables needed to calculate the water balance and track the water, following your instructions. I'll keep you updated.

Kind regards

Martin Nguyen

Martin Nguyen

Sep 11, 2025, 9:04:42 AM
to aubrey, wrf-hydro_users, Arezoo RafieeiNasab
Hi Aubrey and Arezoo,

I am running my model for 5 and 10 years to validate streamflows (still ongoing). For example, in the 5-year case, I plan to use the first year as spin-up, the next 3 years for sensitivity testing/calibration, and the last year for validation.

At the moment, the model runs quite slowly (about 5 minutes of wall time per 5 hours of simulation). I tried increasing the number of processors with -np 2, 4, 16, and 32, but the runtimes remain about the same. Could you please confirm whether I am approaching parallelization correctly, or whether there are other ways to speed up the simulation? Here is the command I used:

"mpirun -np 2 ./wrf_hydro_NoahMP.exe >> run.log 2>&1"

My machine has 32 cores, 64 logical processors, and 192 GB of RAM (it is currently running 242 processes and 7420 threads). For the domain information, I have uploaded the case to the folder vers_003_streamflow. Please let me know if you need further details.

I am also working on the water balance separately and will reach out if I need guidance.

Kind regards,

Martin Nguyen

Arezoo RafieeiNasab

Sep 11, 2025, 12:57:53 PM
to Martin Nguyen, aubrey, wrf-hydro_users
Hi Martin,

I cannot test this quickly on my end, as our queuing system is pretty slow today. However, I looked at a test run I did for you earlier: with 64 cores, it ran 11 days of simulation in 4 minutes, so your run is somehow much slower than mine. Are you sure the parallelization is working? The command is the same as the one I use. Do you see "diag_hydro.XXXX" files in the simulation directory? There should be one per core used; that is a quick way to check whether the model ran with the number of cores you requested.
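
For example, a quick count from the run directory (any method works; sketched here in Python for consistency with the other snippets in this thread):

```python
# Count the diag_hydro.* files; the count should match the core count
# requested with mpirun -np.
import glob
print(len(glob.glob("diag_hydro.*")))
```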

Thanks!
Arezoo 

Martin Nguyen

Sep 12, 2025, 8:13:33 AM
to Arezoo RafieeiNasab, aubrey, wrf-hydro_users
Hi Arezoo,

Thank you very much for pointing out the diag_hydro.XXXX files. That helped me confirm that parallelization in my earlier WRF-Hydro runs was not working correctly. I recompiled the code, and parallelization now works as expected: I see all cores at 100% usage and 64 diag_hydro.XXXX files when running with 64 cores.

However, the runtime is still much slower than yours: my setup takes about 35 minutes to simulate 3 days, whereas yours ran 11 days in about 4 minutes. For reference, here are some details of my configuration:

* System: 
I’m running WRF-Hydro under WSL (Windows Subsystem for Linux), so the code executes in Linux while outputs are accessible in Windows.

* namelist.hrldas
FORCING_TIMESTEP = 3600
NOAH_TIMESTEP = 3600
OUTPUT_TIMESTEP = 3600

* hydro.namelist
TERRAIN_ROUTING = 30
CHANNEL_ROUTING = 30
io_config_outputs = 0 (changed from 5 to 0 to output all variables; I did not notice any difference in speed)

I’ve attached the <Output> folder (vers_003_streamflow) that includes the run.log and the list of diag_hydro.XXXX files for your reference.

Do you think the performance difference could mainly be due to hardware differences, or could running through WSL be limiting the speed? I’d really appreciate your advice, and please let me know if you need further information.

Kind regards,

Martin Nguyen

Arezoo RafieeiNasab

Sep 12, 2025, 11:50:44 AM
to Martin Nguyen, Ryan Cabell, aubrey, wrf-hydro_users
Hi Martin, 

Unfortunately, this is a question I cannot answer. Maybe our software engineer has some ideas; I have CC'd him here.

Thanks!
Arezoo
--
-------------------------------------------------------------------------------------------------
My working day may not be your working day. Please do not feel obliged to reply to this email outside of your normal working hours.
-------------------------------------------------------------------------------------------------
Arezoo Rafieei Nasab, Ph.D.
Project Scientist II
NCAR Research Applications Laboratory

Ryan Cabell

Sep 12, 2025, 12:00:58 PM
to Martin Nguyen, Aubrey Dugger, wrf-hydro_users, Arezoo RafieeiNasab
Hi Martin,

Looking at your list of system specs below, I would recommend running with no more than 32 cores. The WRF-Hydro model (like many numerical models) uses entire cores, and running with 64 hyperthreads (logical processors) will overcommit the system and lead to slowdowns. It has also been our experience that WSL has very slow I/O, so you may be able to speed things up a little by reducing the output frequency from WRF-Hydro.

Hope that helps,
Ryan


----------------------------------------------------------
Ryan Cabell

Deputy Program Director for Engineering
Hydrometeorological Applications Program
Research Applications Laboratory
National Center for Atmospheric Research

rca...@ucar.edu - 303.497.2880

Aubrey Dugger

Sep 12, 2025, 12:06:40 PM
to Ryan Cabell, Martin Nguyen, wrf-hydro_users, Arezoo RafieeiNasab
Would running through Docker help at all? Or is it a hardware limitation? I am not familiar with WSL.

Aubrey
--
-----------------------------------------------------------
Aubrey Dugger
NCAR Research Applications Laboratory
Office: 303-497-8418, Cell: 310-663-5115

Ryan Cabell

Sep 12, 2025, 12:13:25 PM
to Aubrey Dugger, Martin Nguyen, wrf-hydro_users, Arezoo RafieeiNasab
Docker could potentially help, especially if the IO is kept on the Docker side and not bridged to the Windows file system (as WSL usually does).

-Ryan


----------------------------------------------------------
Ryan Cabell

Deputy Program Director for Engineering
Hydrometeorological Applications Program
Research Applications Laboratory
National Center for Atmospheric Research

rca...@ucar.edu - 303.497.2880

Soren Rasmussen

Sep 12, 2025, 12:37:20 PM
to wrf-hyd...@ucar.edu, Aubrey Dugger, Martin Nguyen, Arezoo RafieeiNasab
Hi,

I think Docker Desktop on Windows uses WSL2 as a backend, so it might manage some resources better, but I'm not sure how much difference it would make to performance.

Poking around the internet/ChatGPT, it seems the usual performance killer in WSL cases is I/O. Make sure you are reading/writing under ~/... (or /mnt/wsl/...), and avoid /mnt/c/.

And I haven't tested these, but these MPI configuration recommendations might help:
```
# Keep code+data in ~/project and ~/data (WSL ext4)
export OMPI_MCA_pml=ucx UCX_TLS=shm,tcp UCX_SHM_DEVICES=posix
mpirun --bind-to core --map-by socket -np 16 ./your_mpi_app
```

Cheers,
Soren



--
Soren Rasmussen, Ph.D.
Hydrometeorological Applications Program
Research Applications Laboratory
NSF National Center for Atmospheric Research (NCAR)

Martin Nguyen

Sep 15, 2025, 9:13:05 AM
to Soren Rasmussen, rca...@ucar.edu, wrf-hyd...@ucar.edu, Aubrey Dugger, Arezoo RafieeiNasab
Hi Arezoo, Ryan, Aubrey, and Soren,

Thank you so much for all your help; I really appreciate it. I have tried your suggestions and found that (as Ryan and Soren suggested) using 32 cores, reducing the output frequency (daily rather than hourly), moving the data from "/mnt/c/..." to "~/..." (i.e., "/home/username/..."), and applying the settings below reduced the runtime to 7 minutes for 7 days of simulation. I will use this setup for the 5-year and 10-year runs to collect streamflow results, and I will discuss installing native Ubuntu with my team to make it run faster later. I'll keep you updated.

```
# Keep code+data in ~/project and ~/data (WSL ext4)
export OMPI_MCA_pml=ucx UCX_TLS=shm,tcp UCX_SHM_DEVICES=posix
mpirun --bind-to core --map-by socket -np 32 ./wrf_hydro_NoahMP.exe
```

Kind regards

Martin Nguyen

Martin Nguyen

Oct 3, 2025, 7:47:36 AM
to Soren Rasmussen, rca...@ucar.edu, wrf-hyd...@ucar.edu, Aubrey Dugger, Arezoo RafieeiNasab

Hi team,

I have been working on this issue, and while there have been some improvements since last time, the main problems remain: the baseflow is still below 1 m³/s, the streamflow only reacts to very high rainfall events, and it stays almost unchanged when some parameters are varied.

I ran 5-year simulations (1994–1999), with the first year (1994–1995) as spin-up and the rest for sensitivity testing and validation. Below is a summary of what I have tried:

  • Coeff in GWBUCKPARM.nc (edited as in the sketch after this list): I expected this to influence baseflow, but when I tried 0.5 and 2, the streamflows looked almost identical to each other, and both were lower than the original case (Coeff = 1). Baseflow also stayed below 1 m³/s.

  • refkdt (1, 2 vs. original 3), RETDEPRTFAC (0.1, 0.5 vs. 1), MannN (scaled by 0.1 and 0.5): streamflows were nearly unchanged from the original case, with baseflow still low.

  • OVROUGHRTFAC (0.1, 0.5 vs. 1), DKSAT (scaled by 0.5, 2, 4, 8), SMCMAX (scaled by 0.8, 1.2): these did change the streamflows as expected, but baseflow remained consistently below 1 m³/s.

  • Zmax (tried 1 and 250 vs. the original 50) and Expon (tried 0.5, 2, and 1 vs. the original 3): still in progress, but I think these should also have an effect on baseflow.
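
For reference, the parameter edits above were simple netCDF edits along these lines (a minimal sketch for the Coeff case, with a hypothetical output filename; the same pattern applies to Zmax/Expon and to soil_properties.nc fields such as DKSAT and SMCMAX):

```python
# Sketch: write a GWBUCKPARM.nc variant with a scaled bucket Coeff.
import xarray as xr

ds = xr.open_dataset("DOMAIN/GWBUCKPARM.nc")
ds["Coeff"] = ds["Coeff"] * 2.0              # e.g., the Coeff = 2 case above
ds.to_netcdf("DOMAIN/GWBUCKPARM_coeff2.nc")  # point hydro.namelist at this copy
```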

I've attached the model setups in vers_003_streamflow (without results, due to storage limits; please let me know if you cannot access them) along with comparison graphs in this email (note that if a color seems missing, it is overlapped because the result values of some setups are nearly identical).

I would appreciate any advice you can offer.

Kind regards,


Martin Nguyen


wrfhydro.zip