Snow 3d Model


Agnella Datson

Aug 5, 2024, 4:21:26 AM
to necfaurondesc
We have many different websites with the products you find here, customized for your country. If you switch to the website specific to your country, your area will be set as the default domain for all our maps, and your country's most important cities will appear in the forecast overview.

This product displays output from the European ECMWF global model. Global models usually produce forecasts for the entire world twice daily. Choose any country in the world using the menus to the left, where you will also find a diverse range of products to choose from, including temperature, pressure, precipitation, and much more. The European model runs 10 days into the future but, like all models, becomes less accurate the further ahead it looks.


Fractional snow cover (SCF) as a function of SWE at Col de Porte for models that did not switch off their subgrid parameterizations or impose complete snow cover. HTESSEL is not shown as it is the same as HTESSEL-ML. ORCHIDEE-MICT did not force SCF = 1, but values were missing from the file provided for evaluation.
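For context on the quantity plotted in this figure, many land-surface models diagnose fractional snow cover from SWE with a simple saturating function. The sketch below uses a generic hyperbolic-tangent ramp with a hypothetical threshold parameter swe_ref; it is not the scheme of any particular model in the figure.

import numpy as np

def fractional_snow_cover(swe_kg_m2, swe_ref=15.0):
    # Generic subgrid snow-cover fraction as a function of SWE: rises
    # steeply at low SWE and saturates toward 1. swe_ref (kg m-2) is a
    # hypothetical tuning parameter, not a value from any model shown here.
    return np.tanh(np.asarray(swe_kg_m2, float) / swe_ref)

# Example: SCF approaches 1 once SWE is a few times swe_ref.
print(fractional_snow_cover([1.0, 5.0, 15.0, 50.0]))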


This paper discusses results from model simulations at five mountain sites (Col de Porte, France; Reynolds Mountain East, Idaho, United States; Senator Beck and Swamp Angel, Colorado, United States; Weissfluhjoch, Switzerland), one urban maritime site (Sapporo, Japan), and one Arctic site (Sodankylä, Finland); results for three forested sites will be discussed in a separate publication. Details of the sites and of the forcing and evaluation data are presented in Ménard et al. (2019). Although the 97 site-years of data for these seven reference sites may still be insufficient, they do respond to the demands of previous MIPs by providing more sites in different snowy environments over more years.


Our working hypothesis was formed at the design stage of ESM-SnowMIP and is explicit in Krinner et al. (2018): more sites over more years will help us to identify crucial processes and characteristics that need to be improved, as well as previously unrecognized weaknesses in snow models. However, months of analyzing results led us to an unexpected conclusion: more sites, more years, and more variables do not provide more insight into key snow processes. Instead, they lead to the same conclusions as previous MIPs: albedo is still a major source of uncertainty, surface exchange parameterizations are still problematic, and individual model performance is inconsistent. In fact, models are less classifiable with results from more sites, years, and evaluation variables. Our initial hypothesis proved false and had to be killed off.


The pace of advances in snow modeling and other fields in climate research is limited by the time it takes to collect long-term datasets and to develop methods for measuring complex processes. Furthermore, the logistical challenges of collecting reliable data in environments where unattended instruments are prone to failure continue to restrict the spatial coverage of quality snow datasets.


Errors in the ESM-SnowMIP driving and evaluation data are not discussed here because they are covered in Ménard et al. (2019); implicit in the following sections is that a model can only be as good as the data driving it and against which it is evaluated.


Mean SWE and surface temperature NRMSEs in Fig. 1 are generally low: below 0.6 for half of the models and 1 or greater for only four models. Biases are also relatively low: less than 2°C in surface temperature and less than 0.2 in normalized SWE for four out of five sites in Fig. 2. The sign of the surface temperature bias is the same for at least four out of five sites for all but four models (JULES-I, ORCHIDEE-E, ORCHIDEE-MICT, and SWAP). The six models with the largest negative biases in SWE are among the seven models that do not represent liquid water in snow. The seventh model, RUC, has its largest negative bias at Sapporo, where rain-on-snow events are common. Wind-induced snow redistribution, which no model simulates at a point, is partly responsible for Senator Beck being one of the two sites with the largest SWE NRMSE in more than half of the models.
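For readers unfamiliar with the metrics quoted above, the sketch below shows how an NRMSE and a bias can be computed from paired simulated and observed series. Normalizing the RMSE by the standard deviation of the observations is an assumption here (one common convention); the paper defines its own normalization, and the SWE values used are made up for illustration.

import numpy as np

def nrmse(sim, obs):
    # RMSE normalized by the standard deviation of the observations
    # (a common convention; assumed here, not necessarily the paper's).
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / np.std(obs)

def bias(sim, obs):
    # Mean simulated-minus-observed difference.
    return float(np.mean(np.asarray(sim, float) - np.asarray(obs, float)))

# Illustrative SWE series (kg m-2): an NRMSE below 1 means the model error
# is smaller than the variability of the observations themselves.
obs = np.array([10.0, 50.0, 120.0, 80.0, 20.0])
sim = np.array([12.0, 45.0, 110.0, 90.0, 15.0])
print(nrmse(sim, obs), bias(sim, obs))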


As in previous studies (e.g., Etchevers et al. 2004; Essery 2013), neither the specific albedo scheme nor its complexity determines model performance in ESM-SnowMIP. Neither of the two models with the smallest range of biases, CLASS and EC-Earth, imposed SCF = 1, and both use simple albedo schemes in which snow albedo decreases as a function of time and temperature. Snow albedo parameterizations (Table 1) determine the rates at which albedo varies, but the ranges within which the schemes operate are still set by user-defined minimum and maximum snow albedos, to which models are very sensitive. For most models these parameters are the same at all sites, but measurements suggest that they differ between sites; it is unclear whether some of these variations are due to site-specific measurement errors (e.g., instruments or vegetation in the radiometer field of view). This issue should be investigated further, as this is not the first time that model results have been inconclusive because of such uncertainties (e.g., Essery et al. 2013).
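A minimal sketch of the kind of simple ageing scheme described above, assuming a generic exponential decay: albedo decays with time (faster for melting snow) and is refreshed toward a maximum by fresh snowfall. The parameter names and values (albedo_min, albedo_max, the decay time scales, and the refresh threshold) are illustrative placeholders for the user-defined parameters the text says models are sensitive to, not the constants of CLASS, EC-Earth, or any other model.

import math

def update_snow_albedo(albedo, dt, snowfall, surface_temp_c,
                       albedo_min=0.5, albedo_max=0.85,
                       tau_cold=1.0e6, tau_melt=3.0e5, snow_refresh=10.0):
    # Age the snow albedo over one time step dt (seconds): cold snow decays
    # slowly (tau_cold), melting snow faster (tau_melt).
    tau = tau_melt if surface_temp_c >= 0.0 else tau_cold
    albedo = albedo_min + (albedo - albedo_min) * math.exp(-dt / tau)
    # Fresh snowfall (kg m-2 during the step) refreshes the surface toward
    # the maximum albedo.
    albedo += (albedo_max - albedo) * min(1.0, snowfall / snow_refresh)
    return albedo

# Example: one dry, cold day slowly darkens the snow surface.
print(update_snow_albedo(0.80, dt=86400.0, snowfall=0.0, surface_temp_c=-5.0))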


A different philosophy from some other MIPs was followed here: resubmission of simulations was encouraged if initial results did not appear to be representative of the intended model behavior. Table 4 provides details of the hard- and soft-coded errors identified in the discussions that led 16 of the 26 models to resubmit their results, some more than once. One model was excluded at a late stage because the modeling team could not identify the source of some very large errors that made the model an outlier in all analyses; it would therefore not have added any scientific value to this paper.


Model errors can be statistically quantified; quantifying human errors is considerably more challenging. A methodology widespread in high-risk disciplines (e.g., medicine, aviation, and nuclear power), Human Reliability Assessment, may be the closest analog, but it is a preventative measure. Concerns about reproducibility and traceability have motivated a push for analogous methodologies in the geosciences (Gil et al. 2016), but most remain retrospective checks applied at the paper-writing stage.


Figure 4 quantifies, before and after resubmission, the differences in performance for the two variables (SWE and soil temperature) and the models most affected by human errors. For some models (JULES-GL7, JSBACH-PF, HTESSEL-ML), SWE NRMSEs before resubmission are up to five times higher than after, and the soil temperature bias is double that of the corrected simulation (ORCHIDEE-I). Human errors in models and, as discussed in Ménard et al. (2019) for the first 10 reference sites in ESM-SnowMIP, in data are inevitable, and this snow MIP shows that they are widespread. The language we use to describe numerical models has obscured the fact that they are not pure descriptions of physics but equations and configuration files written by humans. Errare humanum est, perseverare diabolicum: to err is human, to persist in error is diabolical. Ménard et al. (2015) showed that already-published papers had used versions of JULES containing bugs that affected turbulent fluxes and caused early snowmelt. There is no requirement for authors to update papers after publication if retrospective enquiries identify some of the published results as erroneous. In view of the many errors identified here, further investigations are required to start understanding how widespread errors in publications are. Whether present in initialization files or in the source code, these errors impair or slow progress in our understanding of snow modeling because they misrepresent the ability of models to simulate snow mass and energy balances.


As in many other areas of science, calls for reproducibility of model results to become a requirement for publication are gaining ground (Gil et al. 2016). Table 1 was initially intended to list the parameterizations considered most important in snow modeling (Essery et al. 2013; Essery 2015), with, as is conventional (e.g., Rutter et al. 2009; Krinner et al. 2018), a single reference per model. Referencing the parameterizations in the 27 models in fact requires 63 papers and technical reports; a more detailed version of the table and the associated references are included in the supplemental material. The lead author first identified 51 references; the modeling teams were then asked to confirm or correct them and to provide references wherever gaps remained. However, some suggested the wrong references, others revised their initial answers, and a few even discovered that some parameterizations are not described anywhere. Not only is it extremely rare to find complete documentation of a model in a single publication, it is also difficult to find all of a model's parameterizations described anywhere in the literature; sometimes they are described only in publications for other models. Often, the most recent publication refers to previous ones, which may or may not be the first to have described the model, comprehensively or not. Incomplete documentation would be an annoying but unimportant issue if this exercise had not led to the identification of some of the errors discussed in the previous subsection.


There are 615 data files in NetCDF (.nc4) format in this dataset. There is also one companion file in .pdf format that provides additional information on SnowModel. The companion file must be downloaded separately from the data files.
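A minimal way to inspect one of these files in Python, assuming the xarray library is installed; the file and variable names below are placeholders rather than actual names from the dataset, so substitute names from the file you download.

import xarray as xr

# Open one NetCDF-4 data file (placeholder name) and print its variables,
# dimensions, and attributes before reading any data.
ds = xr.open_dataset("example_snowmodel_output.nc4")
print(ds)

# Read one variable into memory; replace "swe" with a variable name
# listed in the header printed above.
swe = ds["swe"].values
ds.close()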
