How to judge the output of NEMO5


emma.x...@gmail.com

Aug 24, 2013, 11:29:21 PM8/24/13
to student-cluste...@googlegroups.com
Hi,
I have recently run almost all of the input decks in the public_examples folder of NEMO5, and I got several kinds of results. Some runs finish cleanly, printing "simulation took ... s, have a good day!"; some print "exit status of rank 0: return code 1"; and some print "first unconverged value (error)". Are all of these results acceptable? Do we need to correct the input decks? Could you please give us some advice?
By the way, I am confused about the standard for judging the output. There are several kinds of output files, such as "VTK", "dat", and "xyz", plus information printed to the screen. Could you give a detailed explanation of the competition rules?

Thanks a lot!
Emma 

Verónica Vergara Larrea (SCC13 Science Lead)

Aug 25, 2013, 8:44:48 PM8/25/13
to student-cluste...@googlegroups.com
Hi Emma,

The input decks in "public_examples" include contributions from the nanoHUB community; some, but not all, of them have been verified and validated by the NEMO5 team. We recommend that you use the examples in the "regression_test" folder instead. These tests are run nightly by the development team, so they should be much more stable and should run without errors. If any of the regression tests give errors, please let us know.

Inside each test case you will find a reference output that you can use to check the accuracy of your results. For example, the "bulk_Si" test has a reference output called "Si_energies_ref.dat". Many tests use ".dat" reference files; the ones that don't contain reference files ("*_ref.*" files) with different extensions.

In the "regression_test" folder there is a script called smartdiff.py that you can use to compare your results against the references with different tolerance values.
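To illustrate what such a tolerance comparison does (the actual smartdiff.py is not shown in this thread, so the function name, logic, and sample data below are assumptions), here is a minimal sketch in Python:

```python
# Minimal sketch of a tolerance-based comparison, in the spirit of
# smartdiff.py (the real script's logic is not shown in this thread).
# It walks two outputs line by line and records the largest absolute
# difference between corresponding numeric tokens.

def max_abs_diff(out_lines, ref_lines):
    """Largest absolute difference between numeric tokens in two outputs."""
    worst = 0.0
    for out_line, ref_line in zip(out_lines, ref_lines):
        for a, b in zip(out_line.split(), ref_line.split()):
            try:
                worst = max(worst, abs(float(a) - float(b)))
            except ValueError:
                pass  # skip non-numeric tokens such as row labels
    return worst

# Invented data standing in for e.g. Si_energies.dat and its reference.
result = ["E1 1.000001", "E2 2.500000"]
reference = ["E1 1.000000", "E2 2.500000"]
tolerance = 1e-4  # the competition tolerance is chosen by the App Specialist
print("PASS" if max_abs_diff(result, reference) <= tolerance else "FAIL")
```

A run against the sample data above passes, since the largest deviation is about 1e-6, well inside the 1e-4 tolerance.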

For the competition, we will give instructions on the specific case to run and the format in which results should be submitted. The Application Specialist will then compare each team's results with the reference results. If the results are within the tolerance chosen by the Application Specialist, they will be considered correct.

Hope this helps! If you have any questions, please let me know.

Best,
Verónica

Verónica Vergara Larrea (SCC13 Science Lead)

Aug 29, 2013, 9:25:19 AM8/29/13
to student-cluste...@googlegroups.com
Hi Emma, 

I'm adding the rest of the thread below so the responses are available to all the teams.

Best, 
Verónica 

~~~~~~~~~ Question from Emma ~~~~~~~~~~~~
Hi, 
Thanks for your advice! It really helps us a lot. From now on I will test the examples in that folder, and I have already found one problem.
P.S. I use the static NEMO5 build (nemostatic) to run the dataset. With the input deck NWFET_1*1nmGaAs_QTBM_Poisson_sp3sstar for _samsung.in, there is a libMesh internal logic error.
I have attached a picture of the error; could you have a look at it? If I find any other errors, I will ask for your help.

By the way, I am wondering whether every case should produce a specific ./all.mat file, and I am not very clear about test.info. I have two questions:
1. Does test.info give the most appropriate number of processes for each dataset?
2. What is the specific meaning of "cores", "devel", and "skip" in test.info?

Best Regards, 
Emma 

Image showing: "Error in libMesh internal logic" 

~~~~~~~~~ Response ~~~~~~~~~~~~
Hi Emma,

The info file is used by the job submission script that runs all the regression tests. For your purposes, it only provides the default number of cores and whether the test is run nightly or not. If the info file has "skip" in it, the test is not currently being run. Is there any other error message accompanying the libMesh error? What OS are you using? I'm CCing Jim Fonseca, the NEMO5 Application Specialist, as he may know what the issue with this test is.
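To make those fields concrete (a hypothetical sketch only: the actual test.info format is not shown in this thread, so the whitespace-separated "key [value]" layout and sample content below are assumptions), a tiny parser might look like:

```python
# Hypothetical sketch: the real test.info format is not shown in this
# thread, so a simple whitespace-separated "key [value]" layout is assumed.
# "cores" carries the default core count; bare flags such as "skip" (test
# not currently run) or "devel" are stored as True.

def parse_info(lines):
    info = {}
    for line in lines:
        parts = line.split()
        if not parts:
            continue  # ignore blank lines
        info[parts[0]] = parts[1] if len(parts) > 1 else True
    return info

sample = ["cores 8", "skip"]  # invented example content
cfg = parse_info(sample)
print(int(cfg["cores"]), cfg.get("skip", False))  # → 8 True
```

A job script could then launch each test with the listed core count and skip any case flagged "skip".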

Jim, I was wondering if you could help me answer Emma's questions:
- Is the default number of cores in the info file the most appropriate/recommended one, or is it simply the number used in the regression tests?
- Do you know of any issues with the GaAs NWFET test using the static NEMO5 build?

Thanks in advance for your help! Also, please feel free to correct me if any of my statements above are incorrect.

Best, 
Verónica

Verónica Vergara Larrea (SCC13 Science Lead)

Aug 29, 2013, 12:44:59 PM8/29/13
to student-cluste...@googlegroups.com
Also, see response from Jim Fonseca, NEMO5 App Specialist:

Re: static build
Your mileage may vary with static builds; we hope to have more robust ones in the future. For now, compile from source or run via a nanoHUB workspace.

1. Does the test.info give the most appropriate number of processes of each dataset?
The tests should work with the numbers of cores given in the info file. It may be possible to get better performance with a different number of cores.

2. What is the specific meaning of "cores" "devel" "skip" in test.info?
What Verónica said is correct. The rest is not important; it is for our internal use.

Jim