If we look at the history of hydrology, this discussion has been had in slightly different form many times before.
Take the case of Robert Horton and the hydrologists of his time, who established the “era of infiltration” in the 1930s: they used a pragmatic concept for practical analysis and prediction purposes even though it was clear to them that the process representation was not correct.
From the 1960s and the Stanford Watershed Model onwards (up to the present), hydrologists have continued to use more or less ad hoc conceptual models for practical analysis and prediction purposes even though those models were known to be wrong, because optimisation would give something (more or less) acceptable. (I am sure Vit Klemes is looking down on all this with a wry smile!!)
Now, in the “era of machine learning”, we are using a whole gamut of hugely parameterised AI methods for practical analysis and prediction purposes without any real understanding of why they give “better” results after calibration. In the hybrid models there is, in addition, the distinct possibility of circular reasoning: building in incorrect structural functioning rather than real information.
And yet – what the era of machine learning is telling us is that not all of the information is being extracted from the data – so the really important question is how we can UNDERSTAND what we are missing without going round in circles. I suggested this in an HP commentary* some time ago, but I have not seen much advance towards that end to date.
Keith Beven