Hello,

I wrote down my understanding of the mesh settings in MOOSE, along with some questions. I hope that, as long as it is correct, the explanation below will be helpful to others who use large meshes.

To begin with, there are two meshing options: replicated and distributed. These can be passed to `parallel_type` in the `Mesh` block of your input file. The default argument for `parallel_type` is `DEFAULT`, which makes MOOSE use the replicated mesh unless you do something special to invoke the distributed mesh.
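For reference, here is a minimal sketch of how this might look in the input file (the mesh file name `mesh.e` is just a placeholder; I believe the accepted values are DEFAULT, REPLICATED, and DISTRIBUTED, but please check the documentation):

```
[Mesh]
  type = FileMesh
  file = mesh.e
  # Placeholder example: request a distributed mesh explicitly
  # instead of relying on the DEFAULT behavior.
  parallel_type = DISTRIBUTED
[]
```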
1. Replicated mesh format

With a replicated mesh, every core in a simulation holds the entire mesh. If you do `mpiexec -n 64 ...`, there are 64 complete copies of the mesh data, rather than partial mesh data stored for each node. Is this correct? The typical benefit of this is that implementing some features is easier. I believe some existing features in MOOSE don't support the distributed mesh; in other words, those features work only with the replicated mesh. Examples of those can be found here.

Using a replicated mesh simplifies things, but if your mesh is too large, too much memory is consumed by redundant mesh data. In the past, I ran some simulations with 10 to 20 million elements on about 60 cores. I observed very high memory usage during the simulations, particularly at the beginning. I didn't pay attention to the mesh setting, so I was using the replicated mesh, and a large portion of memory was used for the mesh data. If the number of elements is more than about 1 million, you can benefit from the distributed mesh format, as discussed in the thread below.

2. Distributed mesh format

Each core owns only the part of the mesh it needs. There are (at least) two ways to use the distributed mesh. One is simply adding `--distributed-mesh` to the command you execute. The other is pre-splitting, as described here. Pre-splitting is useful if your mesh is too large to fit in the RAM of your machine or node without it.

You can use the Nemesis format to output data, which creates one Exodus file per process. Those files can be visualized in ParaView with ease. Also, is it right that even with a distributed mesh we can export data as Exodus, putting all the data into one file? This would be helpful for me.
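As I understand it, the pre-splitting workflow looks roughly like the sketch below. This is not verified against a current MOOSE build; `myapp-opt`, `input.i`, the split-file name, and the processor count are all placeholders, and the exact flag names should be checked against the mesh splitting documentation:

```
# 1. Pre-split the mesh into 64 pieces (placeholder app and input file):
./myapp-opt -i input.i --split-mesh 64 --split-file mesh_split

# 2. Run the simulation using the pre-split mesh:
mpiexec -n 64 ./myapp-opt -i input.i --use-split --split-file mesh_split
```

The point of step 1 is that the full mesh only ever needs to fit in memory once, on the machine doing the splitting; the parallel run in step 2 then reads only the per-processor pieces.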
It would be great if you could give me feedback.

Thank you,
Shohei Ogawa
--
You received this message because you are subscribed to the Google Groups "moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to moose-users+unsubscribe@googlegroups.com.
Visit this group at https://groups.google.com/group/moose-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/CAHfiw0e60Fpu5AMgsSw13p8EQ%3Dsfm__%3DO3x9Bi%2BXMWA_dxhAnw%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
Very good summary, Shohei!

A few minor comments.

On Wed, Jun 13, 2018 at 3:46 PM, Shohei Ogawa <ogawa...@gmail.com> wrote:

> [...] The default argument for parallel_type is DEFAULT. This makes Moose use the replicated mesh unless you do something special to invoke the distributed mesh.

What exactly do you mean by 'something special'?

If I understand correctly, the `parallel_type` parameter will be overridden by the command-line argument `--distributed-mesh`, and `--distributed-mesh` will be passed on to MultiApps. Also, `parallel_type` is possibly not needed when the mesh type is Nemesis, and it currently should not be set to DISTRIBUTED when pre-splitting the mesh. (I did not verify these very carefully, though.) IMO, MOOSE needs to protect this parameter from wrong usage.
1. Replicated mesh formatWith replicated mesh, all cores in each simulation have entire mesh information. If you do `mpiexec -n 64 ...`, there are 64 complete mesh data sets, not storing mesh data for each node. Is this correct? The typical benefit from this is that an implementation offor each node => for each core. I believe the understanding is correct.
> [...] Also, is it right even with distributed mesh, we can export data as Exodus to put all the data into one file? This would be helpful for me.

I do not know if there is a tool to do this.
--
You received this message because you are subscribed to the Google Groups "moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to moose-users...@googlegroups.com.
Visit this group at https://groups.google.com/group/moose-users.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/CABk6%3D%2BZ%3Dfe1q_Xfy-TgDBtuk7UW44hRa_BksnM3Ot%2BEeNo71yg%40mail.gmail.com.