To figure it out, please post the PBS file.
Andrei
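[For reference, a minimal sketch of the kind of PBS submission script being asked for here; the job name, resource numbers, modules, and paths are illustrative assumptions, not anything taken from this thread:]

    #!/bin/bash
    #PBS -N moltres_4group          # job name (assumption)
    #PBS -l nodes=1:ppn=16          # one node, 16 cores -- adjust to the cluster's allocation
    #PBS -l walltime=04:00:00
    #PBS -j oe                      # merge stdout and stderr into one log

    cd $PBS_O_WORKDIR               # run from the directory the job was submitted from
    # module load <site-specific MPI/PETSc modules>
    mpirun -np 16 ../DIR/moltres-opt -i 4group.i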
How many processes are you trying to run? And how many degrees of freedom does your mesh have?

Gavin Ridley
Hi,

I actually used a standard mpirun. I'm sorry, but I am new to cluster computing and I didn't use any PBS; I believe it's OK to do it this way.

Mateusz
On Tue, May 7, 2019 at 2:59 PM andrewryh <andr...@gmail.com> wrote:
Hello,
To figure it out, please post the PBS file.
Andrei
Hello everyone,

I've started using a supercomputer to perform some simulations that are getting heavier and heavier as I go deeper into recreating the MSRE in Moltres. After successfully installing the software on the cluster today, it turned out that a simulation I am trying to run is terminated just after the first time step begins. I am getting the following error message:

    Bad termination of one of your application processes
    PID 16267
    Exit code: 139
    ...
    Your application terminated with the exit string: Segmentation fault (signal 11)
    This typically refers to a problem with your application.

The thing is that the same input file works perfectly on my personal computer. Moreover, another input file that uses the same input data and external files, but is only a short eigenvalue problem, ran on the cluster without any errors. So I am asking you for advice, as I do not really know what could be wrong: the input files are correct and the application seems to work properly. Could it be a lack of memory, or does this error mean something else?

Thank you for your help,
Mateusz Pater
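[In case it helps, a hedged sketch of one common way to get more than the bare "signal 11" out of a MOOSE-based app like Moltres; the paths, process count, and build layout are assumptions:]

    # A debug build adds assertions and symbols, so the crash usually comes with a stack trace.
    cd moltres
    METHOD=dbg make -j 8            # builds moltres-dbg alongside moltres-opt
    mpirun -np 4 ./moltres-dbg -i 4group.i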
What preconditioner are you using? E.g. have you specified a `-pc_type` in your `petsc_options`?
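[As a concrete illustration of what this question is pointing at, a hedged sketch; the option values are assumptions, not a recommendation for this particular model:]

    # In a MOOSE input file the preconditioner is usually set in the Executioner block, e.g.
    #   petsc_options_iname = '-pc_type'
    #   petsc_options_value = 'lu'
    # PETSc options can generally also be passed straight on the command line:
    mpirun -np 4 ./moltres-opt -i 4group.i -pc_type lu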
On Wed, May 8, 2019 at 5:25 AM M. Pater <mateusz...@gmail.com> wrote:
I had used mpirun -np 60 ../DIR/moltres-opt -i 4group.i
Today I tried to run it using only one core, and also using another option with --n-threads=60. So far there are no errors, but it still hasn't got to the first time step calculation, so I can't say whether that resolves the issue.

> It might happen if you try to use more resources than are available. How are you requesting resources on your cluster? What is the spec (node configuration)? And please post the exact mpirun command used.
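[A hedged sketch of the kind of check being suggested here; the numbers are assumptions and should match whatever the scheduler actually allocated:]

    nproc                                              # how many cores this node actually exposes
    mpirun -np 16 ../DIR/moltres-opt -i 4group.i       # keep the rank count at or below that
    # or split the work between MPI ranks and threads instead of 60 ranks on one node:
    mpirun -np 4 ../DIR/moltres-opt -i 4group.i --n-threads=4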
All right, that is 68fefed05677dec7f417ad0ce5c4b6a6f17db200 on my laptop and e5abd5ff24525421086c3028c409e3f6eb47d5cd on the cluster.
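[For anyone following along, a hedged sketch of how such hashes are typically read off on each machine; the repository path is an assumption:]

    cd ~/projects/moose            # wherever the MOOSE checkout lives
    git rev-parse HEAD             # prints the full commit hash of the current checkout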
So I updated my personal MOOSE version to the commit that is installed on the cluster, and the simulation crashed. That means we've found the root cause of the problem. I'm going to change the cluster's hash to another one. Thank you for tracking this down!
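[A hedged sketch of what "changing the cluster's hash" usually amounts to for a MOOSE-based app; the paths and the exact rebuild steps are assumptions about a standard MOOSE/Moltres layout:]

    cd ~/projects/moose
    git fetch origin
    git checkout 68fefed05677dec7f417ad0ce5c4b6a6f17db200   # the laptop's known-good commit from this thread
    git submodule update --init
    ./scripts/update_and_rebuild_libmesh.sh                 # rebuild libMesh against the new commit
    cd ../moltres && make -j 8                              # then rebuild Moltres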
I'm glad that we have reproducibility now between the two systems... but we need to fix the actual problem, or else it will be dangerous to update Moltres to use new versions of MOOSE. So it would be very helpful if you could provide your input file so I can track down what's causing the bad ghosting.
Oh, of course, sorry, I forgot. The case I am running can actually be found in the Moltres GitHub repository under problems/2017_annals..., and it's the 4group.i file. I hope it helps.
--
Mateusz Pater
Université Paris-Saclay, CEA-INSTN
InnoEnergy Master's School
European Master's in Nuclear Energy