Running moose/scripts/update_and_rebuild_petsc.sh on HPC


Tomas Mondragon

Dec 16, 2019, 16:21:01
To: moose-users
Hello,

I am having problems running the update_and_rebuild_petsc.sh script on a couple of HPC machines, since the version of PETSc on both is old (3.6 and 3.7) and Hypre is installed on neither machine.

The problem both machines run into is that somewhere in the build process mpiexec is called, and this causes failures, since the machines require me to use mpiexec_mpt, mpirun, or aprun instead of mpiexec.

Is there some way I can change the executable that the update_and_rebuild scripts use to run MPI programs?

Fande Kong

Dec 16, 2019, 17:05:42
To: moose...@googlegroups.com
What are the exact error messages? 

We want to know which stage went wrong.

Fande,

--
You received this message because you are subscribed to the Google Groups "moose-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to moose-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/277eb13a-0590-4b1a-a089-09a7c35efd83%40googlegroups.com.

Tomas Mondragon

Dec 16, 2019, 17:52:09
To: moose-users
I have attached the configure.log files from when I ran update_and_rebuild_petsc.sh on both machines. The log from jim is from an attempt to rebuild after adding --with-mpiexec=mpiexec_mpt to the call to petsc/configure in the update and rebuild script.
jim_configure.log
onyx_configure.log

Fande Kong

Dec 17, 2019, 11:11:28
To: moose...@googlegroups.com, PETSc users list, Satish Balay
Are you able to run your MPI code using "mpiexec_mpt -n 1 ./yourbinary"? You need to use --with-mpiexec to specify exactly what command line you can run, e.g., --with-mpiexec="mpirun -n 1".
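As a quick sanity check before passing anything to --with-mpiexec, a small helper along these lines can report which launcher is actually on PATH. This is a hypothetical snippet, not part of MOOSE or PETSc; the launcher names are simply the ones mentioned in this thread:

```python
import shutil

def pick_mpi_launcher(candidates=('mpiexec_mpt', 'mpirun', 'aprun', 'mpiexec')):
    """Return the first MPI launcher found on PATH, suitable for passing to
    PETSc's configure via --with-mpiexec. Returns None if none are found.
    (Illustrative helper only.)"""
    for launcher in candidates:
        if shutil.which(launcher) is not None:
            return launcher
    return None

if __name__ == '__main__':
    launcher = pick_mpi_launcher()
    if launcher:
        print('try: --with-mpiexec="%s -n 1"' % launcher)
    else:
        print('no known MPI launcher found on PATH')
```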

I am also CCing this email to the PETSc guys, who may know the answer to these questions.

Thanks,

Fande,

On Mon, Dec 16, 2019 at 3:52 PM Tomas Mondragon <tom.alex....@gmail.com> wrote:
I have attached the configure.log files from when I ran update_and_rebuild_petsc.sh on both machines. The log from jim is from an attempt to rebuild after adding --with-mpiexec=mpiexec_mpt to the call to petsc/configure in the update and rebuild script.


Tomas Mondragon

Dec 18, 2019, 14:36:24
To: moose-users
Yes, but now that I have tried this a couple of different ways with different --with-mpiexec options, I am beginning to suspect that I need to run this as a PBS job on compute nodes rather than as a shell script on login nodes. Also, running it with --with-mpiexec="mpirun -n 1" or with --with-mpiexec="path/to/mpirun" doesn't get me results any different from using --with-mpiexec="mpirun".

I have attached my modified update_and_rebuild_petsc.sh scripts from both machines plus their respective configure.log files.
jim_configure.log
jim_PETSC_update.sh
onyx_configure.log
onyx_PETSC_update.sh

Fande Kong

Dec 19, 2019, 11:00:50
To: moose...@googlegroups.com, tom.alex....@gmail.com, PETSc users list, Satish Balay
Did you try "--with-batch=1"? That suggestion was proposed by Satish earlier (CCed here).

Fande,

On Wed, Dec 18, 2019 at 12:36 PM Tomas Mondragon <tom.alex....@gmail.com> wrote:
Yes, but now that I have tried this a couple of different ways with different --with-mpiexec options, I am beginning to suspect that I need to run this as a PBS job on compute nodes rather than as a shell script on login nodes. Also, running it with --with-mpiexec="mpirun -n 1" or with --with-mpiexec="path/to/mpirun" doesn't get me results any different from using --with-mpiexec="mpirun".

I have attached my modified update_and_rebuild_petsc.sh scripts from both machines plus their respective configure.log files.


Tomas Mondragon

Dec 19, 2019, 18:11:18
To: moose-users
That did help! Thanks Fande and Satish!

However, the script stopped again when it reached the download-slepc step. It looks like slepc is no longer at https://bitbucket.org/slepc/slepc.git.

The slepc git repository was moved to https://gitlab.com/slepc/slepc.git on December 10. The petsc config scripts need to be updated to reflect this.

Tomas Mondragon

Jan 10, 2020, 13:49:52
To: moose-users
There seems to be even more wrong with the PETSc configure scripts.

To get scripts/update_and_rebuild_petsc.sh to compile slepc, I had to wget https://gitlab.com/slepc/slepc/-/archive/master/slepc-master.tar.gz and alter the configure option for slepc from --download-slepc=1 \ to --download-slepc=/p/work2/tmondrag/moose/slepc/slepc-master.tar.gz \ .

I also had to do the same for metis, parmetis, and scotch, downloading from https://bitbucket.org/petsc/pkg-metis/get/v5.1.0-p6.tar.gz, https://bitbucket.org/petsc/pkg-parmetis/get/v4.0.3-p5.tar.gz, and https://gitlab.inria.fr/scotch/scotch/-/archive/master/scotch-master.tar.gz. I haven't figured out where to download fblaslapack from, though I have tried

wget https://bitbucket.org/petsc/pkg-fblaslapack/get/origin/barry/2019-08-22/fix-syntax-for-nag.tar.gz

wget https://bitbucket.org/petsc/pkg-fblaslapack/get/d28880efc974.zip

wget https://bitbucket.org/petsc/pkg-fblaslapack/get/d28880efc974.tar.gz

wget https://bitbucket.org/petsc/pkg-fblaslapack/get/fix-syntax-for-nag.tar.gz

wget https://bitbucket.org/petsc/pkg-fblaslapack/get/master.tar.gz 


It may be because of the directory structure extracted from the fblaslapack tarball. Anyhow, the URLs for the metis, parmetis, and scotch tarballs are the same ones that the configure script attempts to download from; for some reason configure can't download them but wget can. I am attaching my customized update_and_rebuild_petsc script and the configure.log that resulted from running it.

jim_PETSC_configure.log
jim_PETSC_update.pbs

Fande Kong

Jan 10, 2020, 15:38:01
To: moose...@googlegroups.com
You could download fblaslapack from http://ftp.mcs.anl.gov/pub/petsc/externalpackages.

But there is nothing wrong with the original configuration script, which we use every day. I guess you might have a broken git, or your network might be restricted.

Thanks,

Fande,


Tomas Mondragon

Jan 10, 2020, 20:47:00
To: moose-users
I might have bad source code, but I downloaded it from the MOOSE framework page in late November 2019, and mooseframework.org and mooseframework.inl.gov are unreachable at the moment. It might be worth checking.

The configure script is having trouble with fblaslapack because, after extracting the package, it expects to find a directory whose name starts with 'fblaslapack', since that is the only string in the package's downloaddirnames array, which in this instance is probably set by the setNames function at line 182 of petsc/config/BuildSystem/config/package.py.

In packages that compiled successfully, I notice that the downloaddirnames array is set in the __init__ function of the package's Configure class. For example, in petsc/config/BuildSystem/config/packages/PTScotch.py, the __init__ function sets self.downloaddirnames to ['scotch', 'petsc-pkg-scotch']. Because of this, the configure script was able to find the directory that scotch was extracted to, 'scotch-master'. Similarly with the metis package: the __init__ function in petsc/config/BuildSystem/config/packages/metis.py sets self.downloaddirnames to ['petsc-pkg-metis'], so it is able to find the directory 'petsc-pkg-metis-3240f52e8ee'. Otherwise, the downloaddirnames array for the metis package would probably have been set to ['metis'] and the extracted package would not have been found.

Basically, the file petsc/config/BuildSystem/config/packages/fblaslapack.py needs to be updated: an appropriate self.downloaddirnames array needs to be set, and the self.gitcommit string and self.download array need to be revised. The same needs to be done with petsc/config/BuildSystem/config/packages/slepc.py, and there may be numerous other files in petsc/config/BuildSystem/config/packages that need the same re-examination.
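The directory-matching behavior described above can be sketched roughly as follows. This is a simplified illustration of the idea, not the actual BuildSystem code, and the extracted directory name used in the example is hypothetical:

```python
import os
import tempfile

def find_extracted_dir(download_dir, downloaddirnames):
    """Return the first subdirectory of download_dir whose name starts with
    one of the package's downloaddirnames, loosely mimicking how PETSc's
    BuildSystem locates an unpacked tarball (simplified sketch)."""
    for entry in sorted(os.listdir(download_dir)):
        full = os.path.join(download_dir, entry)
        if os.path.isdir(full) and any(entry.startswith(n) for n in downloaddirnames):
            return entry
    return None

# A tarball that unpacks to 'petsc-pkg-fblaslapack-<hash>' is missed when
# downloaddirnames contains only 'fblaslapack'.
tmp = tempfile.mkdtemp()
os.mkdir(os.path.join(tmp, 'petsc-pkg-fblaslapack-d28880e'))
print(find_extracted_dir(tmp, ['fblaslapack']))                           # None
print(find_extracted_dir(tmp, ['fblaslapack', 'petsc-pkg-fblaslapack']))  # 'petsc-pkg-fblaslapack-d28880e'
```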

Tomas Mondragon

Jan 11, 2020, 18:31:06
To: moose...@googlegroups.com
It looks like these issues were resolved a while ago in the PETSc repository itself, so my problem is that I was following the instructions at https://mooseframework.inl.gov/getting_started/installation/hpc_install_moose.html too closely. I'll try using git instead.


Fande Kong

Jan 13, 2020, 12:02:07
To: moose...@googlegroups.com
I see. We did not hit this issue because we often use "--download-xxx=1" instead of "--download-xxx=your/path/xxx.tar.gz".

Anyway, I'm glad your issue was resolved.

Thanks,

Fande,



Tomas Mondragon

Jan 24, 2020, 17:26:32
To: moose-users


On Monday, January 13, 2020 at 11:02:07 AM UTC-6, Fande Kong wrote:
I see. We did not hit this issue because we often use "--download-xxx=1" instead of "--download-xxx=your/path/xxx.tar.gz".

Anyway, I'm glad your issue was resolved.

Thanks,

Fande,

Unfortunately, the issue is not resolved: I discovered that the nodes running the script have no internet access, so I am stuck using "--download-xxx=your/path/xxx.tar.gz".

Anyhow, I tried again using a newer petsc commit than the one the moose git repo uses as a submodule, but ran into trouble while petsc/config/BuildSystem/config/setCompilers.py was searching for a CUDA compiler. The new bit of code in the getExecutable method in petsc/config/BuildSystem/config/base.py, lines 294-318, causes errors if a directory on the search path is inaccessible due to file permissions.
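The failure mode described here can be avoided by wrapping the directory listing in a try/except. This is an illustrative sketch of that kind of guard, not the actual base.py code:

```python
import os

def safe_listdir(d):
    """List a directory, degrading to an empty list when the directory is
    unreadable or missing, so that scanning PATH entries does not abort
    configure. (Illustrative sketch of the guard discussed, not PETSc code.)"""
    try:
        return os.listdir(d)
    except OSError:  # covers PermissionError, FileNotFoundError, etc.
        return []

# An unreadable or nonexistent PATH entry now yields [] instead of raising.
print(safe_listdir('/no/such/directory'))  # []
```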

I think running configure with a "--with-cudac=0" flag might help me avoid that bit of code.

I am attaching my new configure.log in case anyone sees anything else wrong with it.
jim_PETSC_configure.log

Tomas Mondragon

Jan 24, 2020, 18:09:14
To: moose-users
The "--with-cudac=0" flag did make it avoid the new code in getExecutable for a while, but it got tripped up while configure searched for lgrind. Is there any flag for avoiding lgrind or should I just wait on a fix to getExecutable?

Fande Kong

Jan 27, 2020, 12:24:50
To: moose...@googlegroups.com
You could try "--with-lgrind=0". Not sure why lgrind was on. By default, it is off.

BTW, what did you get when you were using the PETSc submodule with manually downloaded packages?

Thanks,

Fande,


Tomas Mondragon

Jan 30, 2020, 10:44:09
To: moose-users
Running configure with the flag "--with-lgrind=0" still caused configure to look for lgrind.

My configure command is

  ./configure $(echo $PFX_STR) \
      --download-hypre=downloaded_thirdParty_tarballs/hypre-93baaa8c9.tar.gz \
      --with-debugging=no \
      --with-shared-libraries=1 \
      --download-fblaslapack=downloaded_thirdParty_tarballs/fblaslapack-v3.4.2-p2.tar.gz \
      --download-metis=downloaded_thirdParty_tarballs/metis-v5.1.0-p7.tar.gz \
      --download-ptscotch=downloaded_thirdParty_tarballs/scotch-v6.0.9.tar.gz \
      --download-parmetis=downloaded_thirdParty_tarballs/parmetis-v4.0.3.tar.gz \
      --download-superlu_dist=downloaded_thirdParty_tarballs/superLU-DIST-v6.2.0.tar.gz \
      --download-mumps=downloaded_thirdParty_tarballs/mumps-v5.2.1-p2.tar.gz \
      --download-scalapack=downloaded_thirdParty_tarballs/scalapack-v2.1.0-p1.tar.gz \
      --download-slepc=downloaded_thirdParty_tarballs/slepc-bf89b9d.tar.gz \
      --with-mpi=1 \
      --with-cxx-dialect=C++11 \
      --with-fortran-bindings=0 \
      --with-sowing=0 \
      --with-batch=1 \
      --with-cudac=0 \
      --with-lgrind=0 \
      $*


Tomas

On Monday, January 27, 2020 at 11:24:50 AM UTC-6, Fande Kong wrote:
You could try "--with-lgrind=0". Not sure why lgrind was on. By default, it is off.

BTW, what did you get when you were using the PETSc submodule with manually downloaded packages?

Thanks,

Fande,

On Fri, Jan 24, 2020 at 4:09 PM Tomas Mondragon <tom.alex...@gmail.com> wrote:
The "--with-cudac=0" flag did make it avoid the new code in getExecutable for a while, but it got tripped up while configure searched for lgrind. Is there any flag for avoiding lgrind or should I just wait on a fix to getExecutable?

On Friday, January 24, 2020 at 4:26:32 PM UTC-6, Tomas Mondragon wrote:


On Monday, January 13, 2020 at 11:02:07 AM UTC-6, Fande Kong wrote:
I see. We did not hit this issue because we often use "--download-xxx=1" instead of "--download-xxx=your/path/xxx.tar.gz".

Anyway, I'm glad your issue was resolved.

Thanks,

Fande,

Unfortunately, the issue is not resolved: I discovered that the nodes running the script have no internet access, so I am stuck using "--download-xxx=your/path/xxx.tar.gz".

Anyhow, I tried again using a newer petsc commit than the one the moose git repo uses as a submodule, but ran into trouble while petsc/config/BuildSystem/config/setCompilers.py was searching for a CUDA compiler. The new bit of code in the getExecutable method in petsc/config/BuildSystem/config/base.py, lines 294-318, causes errors if a directory on the search path is inaccessible due to file permissions.

I think running configure with a "--with-cudac=0" flag might help me avoid that bit of code.

I am attaching my new configure.log in case anyone sees anything else wrong with it.


Fande Kong

Jan 30, 2020, 11:30:18
To: moose...@googlegroups.com
Could you please share the configuration log?

Fande


Tomas Mondragon

Jan 30, 2020, 11:54:21
To: moose-users
Configuration log is attached
jim_PETSC_configure.log

Tomas Mondragon

Jan 30, 2020, 16:52:22
To: moose-users
I altered the problematic part of the getExecutable method in petsc/config/BuildSystem/config/base.py just to get over this obstacle. 

  def getExecutable(self, names, path = [], getFullPath = 0, useDefaultPath = 0, resultName = '', setMakeMacro = 1):
    '''Search for an executable in the list names
       - Each name in the list is tried for each entry in the path until a name is located, then it stops
       - If found, the path is stored in the variable "name", or "resultName" if given
       - By default, a make macro "resultName" will hold the path'''
    found = 0
    if isinstance(names,str) and names.startswith('/'):
      path = os.path.dirname(names)
      names = os.path.basename(names)

    if isinstance(names, str):
      names = [names]
    if isinstance(path, str):
      path = path.split(os.path.pathsep)
    if not len(path):
      useDefaultPath = 1

    def getNames(name, resultName):
      import re
      prog = re.match(r'(.*?)(?<!\\)(\s.*)',name)
      if prog:
        name = prog.group(1)
        options = prog.group(2)
      else:
        options = ''
      if not resultName:
        varName = name
      else:
        varName = resultName
      return name, options, varName

    varName = names[0]
    varPath = ''
    for d in path:
      for name in names:
        name, options, varName = getNames(name, resultName)
        if self.checkExecutable(d, name):
          found = 1
          getFullPath = 1
          varPath = d
          break
      if found: break
    if useDefaultPath and not found:
      for d in os.environ['PATH'].split(os.path.pathsep):
        for name in names:
          name, options, varName = getNames(name, resultName)
          if self.checkExecutable(d, name):
            found = 1
            varPath = d
            break
        if found: break
    if not found:
      dirs = self.argDB['with-executables-search-path']
      if not isinstance(dirs, list): dirs = [dirs]
      for d in dirs:
        for name in names:
          name, options, varName = getNames(name, resultName)
          if self.checkExecutable(d, name):
            found = 1
            getFullPath = 1
            varPath = d
            break
        if found: break

    if found:
      if getFullPath:
        setattr(self, varName, os.path.abspath(os.path.join(varPath, name))+options)
      else:
        setattr(self, varName, name+options)
      if setMakeMacro:
        self.addMakeMacro(varName.upper(), getattr(self, varName))
    else:
      self.logWrite('  Unable to find programs '+str(names)+' providing listing of each search directory to help debug\n')
      self.logWrite('    Path provided in Python program\n')
      for d in path:
        if os.path.isdir(d):
          try:
            self.logWrite('      '+str(os.listdir(d))+'\n')
          except OSError as e:
            self.logWrite('      '+e.strerror+'\n')
        else:
          self.logWrite('      Warning '+d+' is not a directory\n')
      if useDefaultPath:
        if os.environ['PATH'].split(os.path.pathsep):
          self.logWrite('    Path provided by default path\n')
          for d in os.environ['PATH'].split(os.path.pathsep):
            if os.path.isdir(d):
              try:
                self.logWrite('      '+str(os.listdir(d))+'\n')
              except OSError as e:
                self.logWrite('      '+e.strerror+'\n')
            else:
              self.logWrite('      Warning '+d+' is not a directory\n')
      dirs = self.argDB['with-executables-search-path']
      if not isinstance(dirs, list): dirs = [dirs]
      if dirs:
        self.logWrite('    Path provided by --with-executables-search-path\n')
        for d in dirs:
          if os.path.isdir(d):
            try:
              self.logWrite('      '+str(os.listdir(d))+'\n')
            except OSError as e:
              self.logWrite('      '+e.strerror+'\n')
          else:
            self.logWrite('      Warning '+d+' is not a directory\n')
    return found


I wasn't able to figure out why lgrind was being searched for, but running configure showed that it isn't just lgrind that sets off this bit of code; c2html does as well.

Anyhow, with this fix, I was able to get configure to continue on until it failed to compile parmetis. Looking at configure.log it looks like it's because of a bad path to libmetis.

On Thursday, January 30, 2020 at 10:54:21 AM UTC-6, Tomas Mondragon wrote:
Configuration log is attached
jim_PETSC_configure_2.log

Alexander Lindsay

Jan 30, 2020, 16:58:14
To: moose...@googlegroups.com, PETSc
Tomas, make sure you're always using "reply all" when the PETSc users list is involved...


Fande Kong

Jan 30, 2020, 17:28:59
To: petsc-users, moose...@googlegroups.com, Satish Balay, Tomas Mondragon
Bringing the conversation to the MOOSE list as well.

Fande,

On Thu, Jan 30, 2020 at 3:26 PM Satish Balay via petsc-users <petsc...@mcs.anl.gov> wrote:
Pushed one more change - move duplicate/similar code into a function.

Satish

On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote:

> Ah - missed that part. I've updated the branch/MR.
>
> Thanks!
> Satish
>
> On Thu, 30 Jan 2020, Tomas Mondragon wrote:
>
> > Just to be extra safe, that fix should also be applied to the
> > 'with-executables-search-path' section as well, but your fix did help me
> > get past the checks for lgrind and c2html.
> >
> > On Thu, Jan 30, 2020, 3:47 PM Satish Balay <ba...@mcs.anl.gov> wrote:
> >
> > > I pushed a fix to branch balay/fix-check-files-in-path - please give it a
> > > try.
> > >
> > > https://gitlab.com/petsc/petsc/-/merge_requests/2490
> > >
> > > Satish
> > >
> > > On Thu, 30 Jan 2020, Satish Balay via petsc-users wrote:
> > >
> > > > The issue is:
> > > >
> > > > >>>
> > > > [Errno 13] Permission denied: '/pbs/SLB'
> > > > <<<
> > > >
> > > > Try removing this from PATH - and rerun configure.
> > > >
> > > > This part of configure code should be fixed.. [or protected with 'try']
> > > >
> > > > Satish
> > > >
> > > > On Thu, 30 Jan 2020, Fande Kong wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > It looks like a bug for me.
> > > > >
> > > > > PETSc was still trying to detect lgrind even we set "--with-lgrind=0".
> > > The
> > > > > configuration log is attached. Any way to disable lgrind detection.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Fande
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > ---------- Forwarded message ---------
> > > > > From: Tomas Mondragon <tom.alex....@gmail.com>
> > > > > Date: Thu, Jan 30, 2020 at 9:54 AM
> > > > > Subject: Re: Running moose/scripts/update_and_rebuild_petsc.sh on HPC
> > > > > To: moose-users <moose...@googlegroups.com>
> > > > >
> > > > >
> > > > > Configuration log is attached
> > > > >
> > > > >
> > > >
> > >
> > >
> >
>

Tomas Mondragon

Jan 31, 2020, 11:06:14
To: moose-users
Thanks for the change to base.py. Pulling the commit, I can confirm it was able to skip over lgrind and c2html. I did have a problem with ParMetis, but that was because I was accidentally using an old ParMetis commit; downloading the right commit fixed it.

My current problem is with Hypre. Apparently --download-hypre cannot be used with --with-batch=1, even if the download URL points to a file on the local machine. The resulting configure.log is attached for anyone who may be interested.
jim_PETSC_configure.log

Tomas Mondragon

Jan 31, 2020, 12:58:48
To: moose...@googlegroups.com, petsc...@mcs.anl.gov
Hypre problem resolved. PETSc commit 05f86fb, made on August 5, 2019, added the line 'self.installwithbatch = 0' to the __init__ method of the Configure class in petsc/config/BuildSystem/config/packages/hypre.py to fix a bug with hypre installation on Cray KNL systems. Since the machine I was installing on is an SGI system, I decided to try switching to 'self.installwithbatch = 1', and it worked! The configure script was finally able to run to completion.
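For reference, a minimal sketch of the change described above; the attribute name is as reported in this thread, and the class below only stands in for the real config.package subclass in hypre.py:

```python
# Simplified sketch of config/BuildSystem/config/packages/hypre.py.
# installwithbatch = 0 tells configure that the package cannot be installed
# under --with-batch=1; flipping it to 1 allows --download-hypre in batch
# mode on non-Cray-KNL systems, as described above.
class Configure:  # stand-in for the real PETSc package Configure class
    def __init__(self):
        self.installwithbatch = 1  # was 0, which blocked --download-hypre with --with-batch=1
```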

Perhaps there could be a Cray flag for configure to control this, since it is only Crays that have this problem with Hypre?

For my benefit when I have to do this again: to get moose/scripts/update_and_rebuild_petsc.sh to run on an SGI system as a batch job, I had to:

1. Make sure the git (gnu version) module was loaded.
2. git clone moose.
3. cd to the petsc directory and git clone the petsc submodule, but make sure to pull the latest commit. The commit that the moose repo refers to is outdated.
4. cd back to the moose directory, then git add petsc and git commit so that the newest petsc commit gets used by the update script; otherwise the old commit will be used.
5. Download the tarballs for fblaslapack, hypre, metis, mumps, parmetis, scalapack, (PT)scotch, slepc, and superLU_dist. The URLs are in the __init__ methods of the relevant files in moose/petsc/config/BuildSystem/config/packages/.
6. Alter the moose/scripts/update_and_rebuild_petsc.sh script so that it is a working PBS batch job. Be sure to module swap to the gcc compiler, module load git (gnu version), and alter the ./configure command arguments:
     adding
             --with-cudac=0
             --with-batch=1
     changing
             --download-<package>=/path/to/thirdparty/package/tarball
7. If the supercomputer is not a Cray KNL system, change line 26 of moose/petsc/config/BuildSystem/config/packages/hypre.py from 'self.installwithbatch = 0' to 'self.installwithbatch = 1'; otherwise, install hypre on its own and use --with-hypre-dir=/path/to/hypre in the ./configure command.

On Fri, Jan 31, 2020 at 10:06 AM Tomas Mondragon <tom.alex....@gmail.com> wrote:
Thanks for the change to base.py. Pulling the commit, I can confirm it was able to skip over lgrind and c2html. I did have a problem with ParMetis, but that was because I was accidentally using an old ParMetis commit; downloading the right commit fixed it.

My current problem is with Hypre. Apparently --download-hypre cannot be used with --with-batch=1, even if the download URL points to a file on the local machine. The resulting configure.log is attached for anyone who may be interested.


Tomas Mondragon

Feb 1, 2020, 02:09:33
To: Smith, Barry F., moose-users, petsc-users
Thanks, that does sound useful! 

On Fri, Jan 31, 2020, 6:23 PM Smith, Barry F. <bsm...@mcs.anl.gov> wrote:

   You might find this option useful.

   --with-packages-download-dir=<dir>
       Skip network download of package tarballs and locate them in specified dir. If not found in dir, print package URL - so it can be obtained manually.


   This generates a list of URLs to download so you don't need to look through the xxx.py files for that information. Conceivably a script could gather this information from the run of configure and get the tarballs for you.

    Barry