I am currently calibrating SWAT+ for a forested watershed in northern Wisconsin and running into an issue where the basin-level water balance looks reasonable, but simulated streamflow at the outlet is substantially underpredicted. When I compare specific discharge (Q/A), my modeled watershed is roughly an order of magnitude (≈10×) lower than values reported for similar nearby forested catchments in Wisconsin.
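For reference, here is how I am computing the specific-discharge comparison. The numbers below are placeholders rather than my actual values, but I am including the conversion because a clean order-of-magnitude gap is sometimes just a unit slip (e.g. km² vs. m², or daily vs. annual volume) rather than a process error, and I want to rule that out:

```python
# Sketch of the Q/A comparison (hypothetical numbers, not my real data).
# A units mistake in this conversion can easily produce a ~10x discrepancy.

SECONDS_PER_YEAR = 365.25 * 86400

def specific_discharge_mm_per_yr(q_m3s: float, area_km2: float) -> float:
    """Convert mean discharge (m^3/s) and drainage area (km^2) to mm/yr."""
    volume_m3 = q_m3s * SECONDS_PER_YEAR        # annual flow volume, m^3
    depth_m = volume_m3 / (area_km2 * 1e6)      # spread over basin area, m
    return depth_m * 1000.0                     # m -> mm

# Illustrative values only:
sim = specific_discharge_mm_per_yr(0.35, 120.0)  # simulated outlet flow
obs = specific_discharge_mm_per_yr(3.2, 120.0)   # regional reference value
print(f"simulated: {sim:.0f} mm/yr, reference: {obs:.0f} mm/yr, ratio: {sim/obs:.2f}")
```

I have double-checked that the simulated Q is taken from the channel output at the outlet (not a single HRU or subbasin), so I believe the gap is real rather than a bookkeeping artifact.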
I have explored adjustments to lateral flow, evapotranspiration, and groundwater-related parameters, but outlet discharge remains consistently low. I would appreciate any insight into which processes or parameter groups most commonly cause this kind of discrepancy in SWAT+, particularly in systems where subsurface flow is expected to be a major contributor to streamflow.