MPAS model run Error


vineeth krishnan

Aug 17, 2015, 5:23:42 AM
to mpas-atmos...@googlegroups.com
Hi,
Greetings from kv.
I've been trying to run the MPAS model and have a few questions.

I ran the MPAS model successfully at 120 km resolution. Now I'm trying to run the model at 60 km resolution, but it fails with an error that I haven't been able to sort out. The run starts and then aborts with the error shown below.

MPAS IO Error: Bad return value from PIO

The log file is attached to this mail. Could you tell me why such an error occurs once the model has started writing output?

I suspect the error arises from memory issues. If so, what would be a suitable configuration for running the model at 60 km resolution or even higher?

Another thing I want to ask: instead of running the model globally, would it be possible to run it on a regional scale?

Kindly reply.
Thanks in advance
kv


log.0000.err

MPAS-Atmosphere Help

Aug 17, 2015, 11:24:55 AM
to MPAS-Atmosphere Help, vinee...@gmail.com
Hi, KV.

I've got a few questions that might help us to track down the issue:

1) On how many nodes/cores are you running the 60-km case when it fails?
2) Which compiler and compiler version are you using?
3) Was PIO compiled with support for both NetCDF and Parallel-NetCDF?
4) If you turn off the 'output' stream (by setting output_interval="none" in the definition of the stream) and only write out the 'diagnostics' stream, does the model run successfully?
5) After the model fails, are there any partial output files created (even with zero size), or are the log.0000.* files the only files created by the run?

Judging by the log output, you're running MPAS v4.0, right?

Best regards,
Michael

vineeth krishnan

Aug 18, 2015, 1:59:59 AM
to MPAS-Atmosphere Help
Hi
Thank you very much for your reply.

Yes, you are right: I'm running MPAS v4.0.

1. I was using 48 cores to run the 60-km mesh.
2. I've tried gfortran & gcc as well as ifort & icc; both builds gave the same error.
3. Yes, PIO was compiled with NetCDF and Parallel-NetCDF support.
4. The model creates first-time-step files for both the diagnostics and output streams. The diagnostics file is 136 MB and the mpasout file is 8.8 GB; after that the model terminates.
5. I've also run the model with output_interval="none", but it gave the same result: only the diagnostics file was created (136 MB), then the model terminated. The log file for that run is attached to this mail.

Could the model be used for a regional run? My interest lies in the Indian subregion rather than the entire globe.
 
Let me know if any further information is required.

Looking forward to your reply.
Thanks
kv
log.0000.err

Dominikus Heinzeller

Aug 18, 2015, 2:06:54 AM
to vineeth krishnan, MPAS-Atmosphere Help
Hi kv,

From your output file you can see that there are problems with the model's stability.

You are using a 12-minute time step for a 60-km mesh. That is well above the recommended time step for WRF (6 s per km of grid spacing, i.e., 360 s = 6 min), and with MPAS even that time step can be too large for a stable run, especially during spin-up from an initial conditions file. Try 360 s, and if that doesn't work, try 300 s.
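The rule of thumb above can be sketched numerically (this helper is purely an illustration of the 6 s per km guideline from this thread; it is not part of MPAS or WRF):

```python
# Rule of thumb from the discussion above: dt <= ~6 s per km of grid spacing.
# Purely illustrative arithmetic; MPAS itself provides no such helper.

def recommended_dt(grid_spacing_km, seconds_per_km=6.0):
    """Suggested upper bound on the model time step, in seconds."""
    return grid_spacing_km * seconds_per_km

for dx_km in (120, 60, 30):
    print(f"{dx_km:>4}-km mesh -> dt <= {recommended_dt(dx_km):.0f} s")
```

So a 60-km mesh suggests at most about 360 s, and in practice a somewhat smaller value (e.g. 300 s) may be needed during spin-up.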

As far as I know (Michael, please correct me), there is no plan, and no real point, to run MPAS in regional mode; that's why we have the variable-resolution meshes.

Cheers

Dom

--
You received this message because you are subscribed to the Google Groups "MPAS-Atmosphere Help" group.
To unsubscribe from this group and stop receiving emails from it, send an email to mpas-atmosphere-...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
<log.0000.err>

vineeth krishnan

Aug 19, 2015, 5:24:10 AM
to MPAS-Atmosphere Help
Hi

Unfortunately, this didn't solve the issue.
Could you give me any further suggestions?

Thanks
kv

MPAS-Atmosphere Help

Aug 20, 2015, 12:13:02 PM
to MPAS-Atmosphere Help, vinee...@gmail.com
Hi, kv.

In the original log.0000.err file you posted, it looked like the model failed when writing the initial output before taking any timesteps. However, in your second log.0000.err file, it looks like you had gotten past this initial error -- the model was taking timesteps and, as Dom pointed out, it looks like the timestep was too large, leading to an unstable integration. Was it necessary to disable the output streams (output_interval="none") to get past the initial I/O problem, or did you resolve the I/O issue?


If it is the case that you've gotten past the I/O problem and are still facing the model stability problem, could you attach your namelist.init_atmosphere and namelist.atmosphere files? Based on the log.0000.err file, it looks like you're actually running on the 30-km mesh, which has 655362 horizontal grid cells; in this case, you'll need a smaller timestep, perhaps 180 or 150 seconds.


As Dom also mentioned, there is no regional capability in MPAS-Atmosphere right now.


Best regards,
Michael


vineeth krishnan

Aug 21, 2015, 5:54:04 AM
to MPAS-Atmosphere Help
Hi Michael

Thank you very much for your reply.

I made the static file from the 60-km mesh downloaded from the MPAS website, so I believe I'm running the model at 60 km resolution. I've tried time steps of 720, 360, 300, 180, and 120 seconds, but in all of these runs the model failed at the same point, as described below.

If the output stream is set to 1 hr / 6 hr, the model shows the I/O error: the mpasout file is written up to 8.8 GB and then the run fails.

If the output stream is set to none, the model fails after passing the initial time step.

In both cases the diagnostics file is 136 MB. I'm attaching the corresponding log files and namelists to this mail.

Please let me know your suggestions.
Thanks
kv
namelist.atmosphere
streams.atmosphere
log.0000.err_outputstream_1hr
log.0000.err_outputstream_none

MPAS-Atmosphere Help

Aug 21, 2015, 1:20:22 PM
to MPAS-Atmosphere Help, vinee...@gmail.com
Hi, kv.

Thanks very much for the attachments. Besides setting the timestep (config_dt) according to the mesh resolution, you'll also need to set the dissipation length scale (config_len_disp); generally, you can set config_len_disp to the smallest nominal grid distance in your mesh, so for the 30-km mesh, you'd set this variable to 30000.

The mesh with 655362 horizontal grid cells should be the 30-km mesh; you can check this by running 'ncdump -v dcEdge x1.655362.init.nc | more' and paging down until you get to the actual values in the field -- these should all be in the neighborhood of 30000 meters.
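If you prefer to check the spacing programmatically, a small sketch could do the same arithmetic (pure Python; how you extract the dcEdge values, e.g. via ncdump or a NetCDF library, is up to you, and the sample values below are synthetic, chosen to mimic a quasi-uniform 30-km mesh):

```python
# Sketch: given dcEdge values (meters) from an MPAS mesh file, report the
# smallest nominal grid distance, rounded to whole km. Real values would
# come from the mesh file (e.g. x1.655362.init.nc); these are synthetic.

def nominal_resolution_km(dc_edge_m):
    """Smallest nominal grid distance in km (the basis for config_len_disp)."""
    return round(min(dc_edge_m) / 1000.0)

sample_dc_edge = [30100.0, 29900.0, 30050.0, 29950.0]
print(nominal_resolution_km(sample_dc_edge))
```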

Could you try the following:
1) Set config_dt = 120.0
2) Set config_len_disp = 30000.0
3) Turn off output by setting output_interval="none" for the "output" and "diagnostics" streams in your streams.atmosphere file
4) Run the model.
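Concretely, the namelist changes in steps 1 and 2 might look like the sketch below (the &nhyd_model group name is from memory and may differ in your MPAS version; check your own namelist.atmosphere for where config_dt and config_len_disp actually live):

```
&nhyd_model
    config_dt = 120.0
    config_len_disp = 30000.0
/
```

Step 3 is then a matter of setting output_interval="none" on both the "output" and "diagnostics" stream definitions in streams.atmosphere.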

If the model runs stably, then we can move on to the issue of the model output; otherwise, it may be easiest to focus for now on the issue of model integration.

Best regards,
Michael