Dirac 12.2 not installing properly with 64-bit integers


Tuomas Löytynoja

Feb 25, 2013, 5:28:40 AM
to dirac...@googlegroups.com
Hi all,

I'm a PhD student at the University of Oulu and a beginner with Dirac, as well as with installing programs on Unix. With some help I have managed to install Dirac 12.2 with 32-bit integers on our local cluster. However, I would now like to use the KR-CI module, which requires a 64-bit integer installation of the program. I have been trying to do this with the help of the following programs:


CMake 2.8.9
Python 2.6.6
Intel(R) Fortran Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 12.0.5.220 Build 20110719
OpenMPI 1.4.4 (with 64-bit integers)


This is what my installation script looks like:


#!/bin/bash

module load cmake/2.8.9
module load openmpi-intel-i8 # this line is in the .bashrc as well

source /export/intel/composerxe/bin/compilervars.sh intel64 # and this
export PATH=/export/openmpi/1.4.4/intel-i8/bin:$PATH # and this
export LD_LIBRARY_PATH=/export/openmpi/1.4.4/intel-i8/lib:$LD_LIBRARY_PATH # and this
export LD_LIBRARY_PATH=/export/intel/composerxe-2011.5.220/mkl/lib/intel64:$LD_LIBRARY_PATH # and this

rm -rf build
export MATH_ROOT='/export/intel/composerxe-2011.5.220/mkl/'
./setup --int64 --fc=mpif90 --cc=mpicc --cxx=mpic++ --mpi=on
cd build
make -j 6


In addition, I added the options '-xHost -mkl=sequential' to the Intel compiler flags in the files CFlags.cmake and FortranFlags.cmake. This was the output from the setup script:


FC=mpif90 CC=mpicc CXX=mpic++ cmake -DENABLE_MPI=ON -DENABLE_SGI_MPT=OFF -DENABLE_OPENMP=OFF -DENABLE_BLAS=ON -DENABLE_LAPACK=ON -DENABLE_TESTS=OFF -DENABLE_64BIT_INTEGERS=ON -DCMAKE_BUILD_TYPE=Release /home/loytyntu/bin/DIRAC-12.2-Source_64

-- mpi.mod matches current compiler, setting -DUSE_MPI_MOD_F90
-- No 64-bit integer MPI interface found, will use 32-bit integer MPI interface
-- MPI-2 support found
-- The Fortran compiler identification is Intel
-- The C compiler identification is Intel 12.0.0.20110719
-- The CXX compiler identification is Intel 12.0.0.20110719
-- Check for working Fortran compiler: /export/openmpi/1.4.4/intel-i8/bin/mpif90
-- Check for working Fortran compiler: /export/openmpi/1.4.4/intel-i8/bin/mpif90 -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /export/openmpi/1.4.4/intel-i8/bin/mpif90 supports Fortran 90
-- Checking whether /export/openmpi/1.4.4/intel-i8/bin/mpif90 supports Fortran 90 -- yes
-- Check for working C compiler: /export/openmpi/1.4.4/intel-i8/bin/mpicc
-- Check for working C compiler: /export/openmpi/1.4.4/intel-i8/bin/mpicc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /export/openmpi/1.4.4/intel-i8/bin/mpic++
-- Check for working CXX compiler: /export/openmpi/1.4.4/intel-i8/bin/mpic++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Could NOT find Git (missing: GIT_EXECUTABLE)
-- Found LAPACK: MKL
-- Found BLAS: MKL
-- Found MPI_C: /export/openmpi/1.4.4/intel-i8/lib/libmpi.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-rte.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-pal.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so
-- Found MPI_CXX: /export/openmpi/1.4.4/intel-i8/lib/libmpi_cxx.so;/export/openmpi/1.4.4/intel-i8/lib/libmpi.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-rte.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-pal.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so
-- Found MPI_Fortran: /export/openmpi/1.4.4/intel-i8/lib/libmpi_f90.so;/export/openmpi/1.4.4/intel-i8/lib/libmpi_f77.so;/export/openmpi/1.4.4/intel-i8/lib/libmpi.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-rte.so;/export/openmpi/1.4.4/intel-i8/lib/libopen-pal.so;/usr/lib64/libdl.so;/usr/lib64/libnsl.so;/usr/lib64/libutil.so
-- Performing Test MPI_COMPATIBLE
-- Performing Test MPI_COMPATIBLE - Success
-- Performing Test MPI_COMPILER_MATCHES
-- Performing Test MPI_COMPILER_MATCHES - Success
-- Performing Test MPI_ITYPE_MATCHES
-- Performing Test MPI_ITYPE_MATCHES - Failed
-- Performing Test MPI_2_COMPATIBLE
-- Performing Test MPI_2_COMPATIBLE - Success
-- Configuring done
-- Generating done
-- Build files have been written to: /home/loytyntu/bin/DIRAC-12.2-Source_64/build

configure step is done
now you need to compile the sources

to compile with configured parameters (recommended):
$ cd build
$ make

to modify configured parameters and then compile:
$ cd build
$ ccmake ..
$ make


After that, the compiling part seems to work. If I now run runtest with the '--quick' option, the following tests crash:


[ 13%] CRASH bss_energy HF.bss_sfb.cc_cisd Z80H.lsym.dir 00m02s
[ 14%] CRASH bss_energy HF.dk2_sfb.pnuc.cc_cisd Z80H.lsym.dir 00m01s
[ 15%] CRASH cc_energy_and_mp2_dipole ccsd.small H2O 00m11s
[ 28%] CRASH krci_energy be.d2h Be.d2h 00m01s
[ 29%] CRASH krci_properties_omega_tdm h2 H2 00m01s
[ 30%] CRASH krci_properties_perm_dipmom h3 H3 00m12s
[ 34%] CRASH lucita_short He He 00m01s
[ 35%] CRASH mcscf_energy Be Be 00m00s
[ 39%] CRASH reladc_sip adclevel3_real ne_d2h 00m02s
[ 40%] CRASH reladc_sip hcn_complex_3 hcn_cs 00m14s


For example, the krci_energy test for Be stops immediately after the KR-CI part of the calculation begins, and this error message is shown:


**** dirac-executable stderr console output : ****
*** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[ta7:13878] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!

directory: /home/loytyntu/dirac/test/krci_energy
inputs: F.mol & f.inp


It seems to me that most of the failing tests require 64-bit integers to work. Also, the setup output says 'No 64-bit integer MPI interface found'. What could be the problem here? Is it really the OpenMPI part, or maybe the math libraries? Does the problem lie in the library paths? Or could it be that OpenMPI cannot handle 64-bit integers for some reason? When I type ompi_info -a | grep 'Fort integer size', I get 'Fort integer size: 8', so that wouldn't seem to be the case.
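For reference, that last check can be wrapped in a tiny script. This is only a sketch: check_int_size is a hypothetical helper, and the sample line imitates the format of Open MPI's ompi_info report.

```shell
#!/bin/bash
# Sketch: parse the Fortran integer size from ompi_info-style text.
# check_int_size is a hypothetical helper; pipe `ompi_info -a` into it for real use.
check_int_size() {
    grep 'Fort integer size' | sed 's/.*: *//'
}

# Sample line imitating the report of an i8 (64-bit integer) Open MPI build:
size=$(printf '%s\n' "          Fort integer size: 8" | check_int_size)
if [ "$size" = "8" ]; then
    echo "MPI Fortran integers are 8 bytes (64-bit)"
else
    echo "MPI Fortran integers are ${size:-unknown} bytes"
fi
```

A size of 8 here only shows the MPI library itself was built with 64-bit Fortran integers; it says nothing about whether the DIRAC build found a matching interface.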

I'm looking forward to your answer.



With best regards
Tuomas Löytynoja

André Luiz Fassone Canova

Feb 25, 2013, 5:41:13 AM
to dirac...@googlegroups.com
Hi,

in my case, I tried to use it on a grid (GridUNESP) that has a 64-bit Intel MPI, but Dirac enters an infinite loop when running in MPI mode.

So I put together a small cluster of 2 machines (for testing) and another with 9 machines (for production runs).

But the Mageia MPI packages do not support 64-bit integers, and neither does their LAPACK... so I am using a 64-bit MPI and MKL, following this tutorial:

http://www.diracprogram.org/doc/release-12/installation/int64/mpi.html

Good luck,

André Luiz Fassone Canova
alfc...@gmail.com
alfc...@ig.com.br

"There are 10 kinds of people: those who understand binary code and those who don't!"


--
You received this message because you are subscribed to the Google Groups "dirac-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dirac-users...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Radovan Bast

Feb 25, 2013, 5:51:40 AM
to dirac...@googlegroups.com
hi Tuomas,

there are two problems happening here.

1) the crashing tests.
this probably has nothing to do with the integer type; the reason is that some modules in DIRAC cannot be executed sequentially once they have been compiled in parallel. these modules need to be called through the MPI interface even when running on one processor.
you can call this "bad programming practice" on our side, and it will be fixed in the future.
if you run these tests individually with runtest --mpi=1 (or more than 1) they should work.

2) > -- Performing Test MPI_ITYPE_MATCHES - Failed
this is strange, since it looks like you did everything right.
what cmake does here behind the curtain is compile the program
cmake/parallel-environment/test-MPI-itype-compatibility.F90

please try the following:
$ mpif90 cmake/parallel-environment/test-MPI-itype-compatibility.F90

what error do you see on your screen?
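That probe can be sketched as a small wrapper around the compile step. try_itype_probe is a made-up name for illustration; the source path is the one from the DIRAC tree, and the script should be run from the source root.

```shell
#!/bin/bash
# Sketch of the probe CMake runs for MPI_ITYPE_MATCHES: try to compile the
# test program and report the result.  try_itype_probe is a hypothetical
# wrapper, not part of DIRAC or CMake.
try_itype_probe() {
    local fc=$1 src=$2
    if "$fc" "$src" -o itype-test 2>itype-compile.log; then
        echo "MPI_ITYPE_MATCHES: success"
    else
        echo "MPI_ITYPE_MATCHES: failed"
        cat itype-compile.log      # show the compiler's error message
    fi
}

# usage (in the DIRAC source root):
# try_itype_probe mpif90 cmake/parallel-environment/test-MPI-itype-compatibility.F90
```

If the wrapper reports success by hand while CMake's own test fails, the environment seen by CMake likely differs from the interactive shell.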

good luck!
radovan

Stefan Knecht

Feb 25, 2013, 6:28:40 AM
to dirac...@googlegroups.com
hi Tuomas,

i agree with Radovan on 1): running the quick test set with
./runtest --mpi=1 --quick
should work.
as to 2): in addition to what Radovan said, could you send us the output of
./ompi_info -a

you write that your MPI library has been installed with 64-bit integers.
if so, we should see it in this output, as the default Fortran integer size should be 8.
note that Dirac12 has an MPI interface library which takes care of the 32-bit/64-bit issue wrt MPI, so that even if the MPI library does not use 64-bit integers, you can still safely use it with a 64-bit integer Dirac. this was not possible with Dirac11 and previous versions.

with best regards,

stefan



--

Stefan Knecht
Laboratorium fuer Physikalische Chemie
HCIG 230
ETH Zuerich
Wolfgang-Pauli-Str. 10
CH-8093 Zuerich
Schweiz

phone: +41 44 633 22 19 fax: +41 44 633 15 94
email: stefan...@phys.chem.ethz.ch
web: http://www.theochem.uni-duesseldorf.de/users/stefan/index.htm
http://www.reiher.ethz.ch/people/knechste

Tuomas Löytynoja

Feb 25, 2013, 6:48:34 AM
to dirac...@googlegroups.com
Hi,

I tried to run a test with the command './runtest --mpi=1 --tests="krci_energy" --binary=/home/loytyntu/bin/DIRAC-12.2-Source_64/build/dirac.x', but it still crashed. I'm guessing that now the crash was due to the missing 64-bit integers?

Here's the test result for the itype compatibility:

[loytyntu@taygeta DIRAC-12.2-Source_64]$ pwd
/home/loytyntu/bin/DIRAC-12.2-Source_64
[loytyntu@taygeta DIRAC-12.2-Source_64]$ mpif90 cmake/parallel-environment/test-MPI-itype-compatibility.F90
[loytyntu@taygeta DIRAC-12.2-Source_64]$

So I don't get any kind of error message, and the compilation produces a new executable, a.out.



Regards
Tuomas

Stefan Knecht

Feb 25, 2013, 6:51:57 AM
to dirac...@googlegroups.com
hi,


what is the output of
$ ./runtest --mpi=1 --tests="krci_energy" --verbose --binary=/home/loytyntu/bin/DIRAC-12.2-Source_64/build/dirac.x
?
can you send me the output file(s) in
tests/krci_energy/*.out

along with the output of
$ ompi_info -a

thanks in advance and with best regards,

stefan

Tuomas Löytynoja

Feb 25, 2013, 7:27:06 AM
to dirac...@googlegroups.com
Hi Stefan,

The result of the krci_energy test is the following:


[loytyntu@taygeta DIRAC-12.2-Source_64]$ ./runtest --mpi=1 --tests="krci_energy" --verbose --binary=/home/loytyntu/bin/DIRAC-12.2-Source_64/build/dirac.x

result test inp mol/xyz time



**** dirac-executable stderr console output : ****
*** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[taygeta.oulu.fi:14824] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!

directory: /home/loytyntu/bin/DIRAC-12.2-Source_64/test/krci_energy
inputs: Be.d2h.mol & be.d2h.inp
[ 50%] CRASH krci_energy be.d2h Be.d2h 00m02s


**** dirac-executable stderr console output : ****
*** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[taygeta.oulu.fi:14839] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!

directory: /home/loytyntu/bin/DIRAC-12.2-Source_64/test/krci_energy
inputs: F.mol & f.inp
[100%] CRASH krci_energy f F 00m04s

summary: 0 ok
2 crashed

total time: 00h00m06s


I added the output files as an attachment, along with the ompi_info -a data.

I must add that I haven't installed any of the other programs, including OpenMPI, myself, but it should still be 64-bit.



Tuomas

Tuomas Löytynoja

Feb 25, 2013, 7:31:49 AM
to dirac...@googlegroups.com
For whatever reason I couldn't send attachments, so I will just paste the raw text of the results here.

Here's be.d2h_Be.d2h.out:

DIRAC pam run in /home/loytyntu/bin/DIRAC-12.2-Source_64/test/krci_energy
DIRAC serial starts by allocating 64000000 words (488 MB) of memory
DIRAC serial has no limitations in place for the amount of dynamically allocated memory

Note: maximum allocatable memory for serial run can be set by pam --aw

 [DIRAC12 ASCII-art banner, contributor list, and citation notice omitted]


Binary information
------------------

 Who compiled             | loyt...@taygeta.oulu.fi
 System                   | Linux-2.6.32-279.22.1.el6.x86_64
 CMake generator          | Unix Makefiles
 Processor                | x86_64
 64-bit integers          | ON
 MPI                      | ON
 Fortran compiler         | /export/openmpi/1.4.4/intel-i8/bin/mpif90
 Fortran compiler version | ifort (IFORT) 12.0.5 20110719
 Fortran flags            | -w -assume byterecl -DVAR_IFORT -g -traceback -i8 
                          |  -O3 -ip -xHost -mkl=sequential
 C compiler               | /export/openmpi/1.4.4/intel-i8/bin/mpicc
 C compiler version       | icc (ICC) 12.0.5 20110719
 C flags                  | -g -wd981 -wd279 -wd383 -vec-report0 -wd1572 -wd17
                          | 7 -O2 -xHost -mkl=sequential
 C++ compiler             | /export/openmpi/1.4.4/intel-i8/bin/mpic++
 C++ compiler version     | unknown
 C++ flags                | -Wno-unknown-pragmas -debug -O3 -DNDEBUG
 BLAS                     | -Wl,--start-group;-limf;/export/intel/composerxe-2
                          | 011.5.220/mkl/lib/intel64/libmkl_core.so;/export/i
                          | ntel/composerxe-2011.5.220/mkl/lib/intel64/libmkl_
                          | intel_thread.so;/usr/lib64/libpthread.so;/usr/lib6
                          | 4/libm.so;/export/intel/composerxe-2011.5.220/mkl/
                          | lib/intel64/libmkl_intel_ilp64.so;-openmp;-Wl,--en
                          | d-group
 LAPACK                   | -Wl,--start-group;/export/intel/composerxe-2011.5.
                          | 220/mkl/lib/intel64/libmkl_lapack95_ilp64.a;-Wl,--
                          | end-group
 Static linking           | OFF
 Last Git revision        | 66db3aee4f2cd1820a9f5ca2561ffad377c282b2
 Configuration time       | 2013-02-22 18:42:26.066437


Execution time and host
-----------------------

 
     Date and time (Linux)  : Mon Feb 25 14:07:25 2013
     Host name              : taygeta.oulu.fi                         


Contents of the input file
--------------------------

**DIRAC                                                                                             
.TITLE                                                                                              
 Testing KR-CI for Be in Pierloot basis                                                             
.WAVE FUNCTION                                                                                      
**HAMILTONIAN                                                                                       
.DOSSSS                                                                                             
**WAVE FUNCTION                                                                                     
.SCF                                                                                                
.KR CI                                                                                              
*SCF                                                                                                
.CLOSED SHELL                                                                                       
 4 0                                                                                                
*KRCICALC                                                                                           
.CI PROGRAM                                                                                         
LUCIAREL                                                                                            
.INACTIVE                                                                                           
 0 0                                                                                                
.GASSH                                                                                              
 3                                                                                                  
 2 0                                                                                                
 1 2                                                                                                
12 6                                                                                                
.GASSPC                                                                                             
 0 4                                                                                                
 2 4                                                                                                
 4 4                                                                                                
.MK2REF                                                                                             
 0                                                                                                  
.MK2DEL                                                                                             
 4                                                                                                  
.CIROOTS                                                                                            
 1  1                                                                                               
.MAX CI                                                                                             
 16                                                                                                 
.MXCIVE                                                                                             
 4                                                                                                  
.NOOCCN                                                                                             
**END OF                                                                                            


Contents of the molecule file
-----------------------------

INTGRL                                                                                              
 Be atom in uncontracted Pierloot basis set, MOLCAS 5, ANO-S                                        
 Generated small component via RKB                                                                  
C   1   3   Z  Y  X                                                                                 
        4.    1                                                                                     
Be 1           .00000000           .00000000           .00000000                                    
LARGE    3    2    1    1                                                                           
H   7    0    3                                                                                     
      2732.3281                                                                                     
       410.31981                                                                                    
        93.672648                                                                                   
        26.587957                                                                                   
         8.6295600                                                                                  
         3.0562640                                                                                  
         1.1324240                                                                                  
H   3    0    3                                                                                     
          .18173200                                                                                 
          .05917000                                                                                 
          .02071000                                                                                 
H   4    0    3                                                                                     
         1.1677000                                                                                  
          .36500000                                                                                 
          .11410000                                                                                 
          .03570000                                                                                 
H   3    0    3                                                                                     
          .54680000                                                                                 
          .14650000                                                                                 
          .03930000                                                                                 
FINISH                                                                                              


    *************************************************************************
    ****************  Testing KR-CI for Be in Pierloot basis ****************
    *************************************************************************

 Jobs in this run:
   * Wave function


    **************************************************************************
    ************************** General DIRAC set-up **************************
    **************************************************************************

   CODATA Recommended Values of the Fundamental Physical Constants: 1998  
                Peter J. Mohr and Barry N. Taylor                         
   Journal of Physical and Chemical Reference Data, Vol. 28, No. 6, 1999  
 * The speed of light :        137.0359998
 * Running in four-component mode
 * Direct evaluation of the following two-electron integrals:
   - LL-integrals
   - SL-integrals
   - SS-integrals
   - GT-integrals
 * Spherical transformation embedded in MO-transformation
   for large components
 * Transformation to scalar RKB basis embedded in
   MO-transformation for small components
 * Thresholds for linear dependence:
   Large components:   1.00D-06
   Small components:   1.00D-08
 * General print level   :   0


    *************************************************************************
    ****************** Output from HERMIT input processing ******************
    *************************************************************************



    *************************************************************************
    ****************** Output from READIN input processing ******************
    *************************************************************************



  Title Cards
  -----------

   Be atom in uncontracted Pierloot basis set, MOLCAS 5, ANO-S            
   Generated small component via RKB                                      

  Nuclear Gaussian exponent for atom of charge   4.000 :    7.8788802914D+08


  Symmetry Operations
  -------------------

  Symmetry operations: 3



                          SYMGRP:Point group information
                          ------------------------------

Point group: D2h

   * The point group was generated by:

      Reflection in the xy-plane
      Reflection in the xz-plane
      Reflection in the yz-plane

   * Group multiplication table

        |  E   C2z  C2y  C2x   i   Oxy  Oxz  Oyz
   -----+----------------------------------------
     E  |  E   C2z  C2y  C2x   i   Oxy  Oxz  Oyz
    C2z | C2z   E   C2x  C2y  Oxy   i   Oyz  Oxz
    C2y | C2y  C2x   E   C2z  Oxz  Oyz   i   Oxy
    C2x | C2x  C2y  C2z   E   Oyz  Oxz  Oxy   i 
     i  |  i   Oxy  Oxz  Oyz   E   C2z  C2y  C2x
    Oxy | Oxy   i   Oyz  Oxz  C2z   E   C2x  C2y
    Oxz | Oxz  Oyz   i   Oxy  C2y  C2x   E   C2z
    Oyz | Oyz  Oxz  Oxy   i   C2x  C2y  C2z   E 

   * Character table

        |  E   C2z  C2y  C2x   i   Oxy  Oxz  Oyz
   -----+----------------------------------------
    Ag  |   1    1    1    1    1    1    1    1
    B1u |   1    1   -1   -1   -1   -1    1    1
    B2u |   1   -1    1   -1   -1    1   -1    1
    B3g |   1   -1   -1    1    1   -1   -1    1
    B3u |   1   -1   -1    1   -1    1    1   -1
    B2g |   1   -1    1   -1    1   -1    1   -1
    B1g |   1    1   -1   -1    1    1   -1   -1
    Au  |   1    1    1    1   -1   -1   -1   -1

   * Direct product table

        | Ag   B1u  B2u  B3g  B3u  B2g  B1g  Au 
   -----+----------------------------------------
    Ag  | Ag   B1u  B2u  B3g  B3u  B2g  B1g  Au 
    B1u | B1u  Ag   B3g  B2u  B2g  B3u  Au   B1g
    B2u | B2u  B3g  Ag   B1u  B1g  Au   B3u  B2g
    B3g | B3g  B2u  B1u  Ag   Au   B1g  B2g  B3u
    B3u | B3u  B2g  B1g  Au   Ag   B1u  B2u  B3g
    B2g | B2g  B3u  Au   B1g  B1u  Ag   B3g  B2u
    B1g | B1g  Au   B3u  B2g  B2u  B3g  Ag   B1u
    Au  | Au   B1g  B2g  B3u  B3g  B2u  B1u  Ag 


                            **************************
                            *** Output from DBLGRP ***
                            **************************

   * Two fermion irreps:  E1g  E1u
   * Real group. NZ = 1
   * Direct product decomposition:
          E1g x E1g : Ag  + B1g + B2g + B3g
          E1u x E1g : Au  + B1u + B2u + B3u
          E1u x E1u : Ag  + B1g + B2g + B3g


                                 Spinor structure
                                 ----------------


   * Fermion irrep no.: 1            * Fermion irrep no.: 2

      La  |  Ag (1)  B1g(2)  |                La  |  Au (1)  B1u(2)  |
      Sa  |  Au (1)  B1u(2)  |                Sa  |  Ag (1)  B1g(2)  |
      Lb  |  B2g(3)  B3g(4)  |                Lb  |  B2u(3)  B3u(4)  |
      Sb  |  B2u(3)  B3u(4)  |                Sb  |  B2g(3)  B3g(4)  |


                              Quaternion symmetries
                              ---------------------

    Rep  T(+)
    -----------------------------
    Ag   1
    B1u  i
    B2u  j
    B3g  k
    B3u  k
    B2g  j
    B1g  i
    Au   1


  Atoms and basis sets
  --------------------

  Number of atom types:     1
  Total number of atoms:    1

  label    atoms   charge   prim    cont     basis   
  ----------------------------------------------------------------------
  Be 1        1       4      40      40      L  - [10s4p3d|10s4p3d]                                              
                             97      97      S  - [4s13p4d3f|4s13p4d3f]                                          
  ----------------------------------------------------------------------
                             40      40      L  - large components
                             97      97      S  - small components
  ----------------------------------------------------------------------
  total:      1       4     137     137

  Cartesian basis used.
  Threshold for integrals (to be written to file):  1.00D-15


  References for the basis sets
  -----------------------------

  Atom type   1
  Basis set typed explicitly in input file                                        


  Cartesian Coordinates
  ---------------------

  Total number of coordinates:  3


   1   Be 1     x      0.0000000000
   2            y      0.0000000000
   3            z      0.0000000000



  Cartesian coordinates xyz format (angstrom)
  -------------------------------------------

    1
 
Be 1   0.0000000000   0.0000000000   0.0000000000


  Symmetry Coordinates
  --------------------

  Number of coordinates in each symmetry:   0  1  1  0  1  0  0  0


  Symmetry 2

   1   Be 1  z    3


  Symmetry 3

   2   Be 1  y    2


  Symmetry 5

   3   Be 1  x    1
  Nuclear repulsion energy :    0.000000000000


                                GETLAB: AO-labels
                                -----------------

   * Large components:   10
     1  L Be  1 s        2  L Be  1 px       3  L Be  1 py       4  L Be  1 pz       5  L Be  1 dxx      6  L Be  1 dxy 
     7  L Be  1 dxz      8  L Be  1 dyy      9  L Be  1 dyz     10  L Be  1 dzz 
   * Small components:   20
    11  S Be  1 s       12  S Be  1 px      13  S Be  1 py      14  S Be  1 pz      15  S Be  1 dxx     16  S Be  1 dxy 
    17  S Be  1 dxz     18  S Be  1 dyy     19  S Be  1 dyz     20  S Be  1 dzz     21  S Be  1 fxxx    22  S Be  1 fxxy
    23  S Be  1 fxxz    24  S Be  1 fxyy    25  S Be  1 fxyz    26  S Be  1 fxzz    27  S Be  1 fyyy    28  S Be  1 fyyz
    29  S Be  1 fyzz    30  S Be  1 fzzz


                                GETLAB: SO-labels
                                -----------------

   * Large components:   10
     1  L Ag Be s        2  L Ag Be dxx      3  L Ag Be dyy      4  L Ag Be dzz      5  L B1uBe pz       6  L B2uBe py  
     7  L B3gBe dyz      8  L B3uBe px       9  L B2gBe dxz     10  L B1gBe dxy 
   * Small components:   20
    11  S Ag Be s       12  S Ag Be dxx     13  S Ag Be dyy     14  S Ag Be dzz     15  S B1uBe pz      16  S B1uBe fxxz
    17  S B1uBe fyyz    18  S B1uBe fzzz    19  S B2uBe py      20  S B2uBe fxxy    21  S B2uBe fyyy    22  S B2uBe fyzz
    23  S B3gBe dyz     24  S B3uBe px      25  S B3uBe fxxx    26  S B3uBe fxyy    27  S B3uBe fxzz    28  S B2gBe dxz 
    29  S B1gBe dxy     30  S Au Be fxyz


  Symmetry Orbitals
  -----------------

  Number of orbitals in each symmetry:           35    26    26     7    26     7     7     3
  Number of large orbitals in each symmetry:     19     4     4     3     4     3     3     0
  Number of small orbitals in each symmetry:     16    22    22     4    22     4     4     3

* Large component functions

  Symmetry  Ag ( 1)

       10 functions:    Be s   
        3 functions:    Be dxx 
        3 functions:    Be dyy 
        3 functions:    Be dzz 

  Symmetry  B1u( 2)

        4 functions:    Be pz  

  Symmetry  B2u( 3)

        4 functions:    Be py  

  Symmetry  B3g( 4)

        3 functions:    Be dyz 

  Symmetry  B3u( 5)

        4 functions:    Be px  

  Symmetry  B2g( 6)

        3 functions:    Be dxz 

  Symmetry  B1g( 7)

        3 functions:    Be dxy 

* Small component functions

  Symmetry  Ag ( 1)

        4 functions:    Be s   
        4 functions:    Be dxx 
        4 functions:    Be dyy 
        4 functions:    Be dzz 

  Symmetry  B1u( 2)

       13 functions:    Be pz  
        3 functions:    Be fxxz
        3 functions:    Be fyyz
        3 functions:    Be fzzz

  Symmetry  B2u( 3)

       13 functions:    Be py  
        3 functions:    Be fxxy
        3 functions:    Be fyyy
        3 functions:    Be fyzz

  Symmetry  B3g( 4)

        4 functions:    Be dyz 

  Symmetry  B3u( 5)

       13 functions:    Be px  
        3 functions:    Be fxxx
        3 functions:    Be fxyy
        3 functions:    Be fxzz

  Symmetry  B2g( 6)

        4 functions:    Be dxz 

  Symmetry  B1g( 7)

        4 functions:    Be dxy 

  Symmetry  Au ( 8)

        3 functions:    Be fxyz


   ***************************************************************************
   *************************** Hamiltonian defined ***************************
   ***************************************************************************

 * Print level:    0
 * Dirac-Coulomb Hamiltonian
 * Default integral flags passed to all modules
   - LL-integrals:     1
   - LS-integrals:     1
   - SS-integrals:     1
   - GT-integrals:     0
 * Basis set:
   - uncontracted large component basis set
   - uncontracted small component basis set


 Information about the restricted kinetic balance scheme:
 * Default RKB projection:
   1: Pre-projection in scalar basis
   2: Removal of unphysical solutions (via diagonalization of free particle Hamiltonian)


    **************************************************************************
    ************************** Wave function module **************************
    **************************************************************************

 Jobs in this run (in execution order):
 * Hartree-Fock calculation
 * Kramers restricted CI calculation
===========================================================================
 SCFINP: Set-up for Hartree-Fock calculation:
===========================================================================
 * Number of fermion irreps: 2
 * Closed shell SCF calculation with     4 electrons in
       2 orbitals in Fermion irrep 1 and    0 orbitals in Fermion irrep 2
 * Bare nucleus screening correction used for start guess
 * General print level   :   0
 ***** INITIAL TRIAL SCF FUNCTION *****
 * Trial vectors read from file DFCOEF
 ***** SCF CONVERGENCE CRITERIA *****
 * Convergence on norm of error vector (gradient).
   Desired convergence:1.000D-07
   Allowed convergence:1.000D-06

 ***** CONVERGENCE CONTROL *****
 * Fock matrix constructed using differential density matrix
    with optimal parameter.
 * DIIS (in MO basis)
 * DIIS will be activated when convergence reaches : 1.00D+20
   - Maximum size of B-matrix:   10
 * Damping of Fock matrix when DIIS is not activated. 
   Weight of old matrix    : 0.250
 * Maximum number of SCF iterations  :   50
 * No quadratic convergent Hartree-Fock
 * Contributions from 2-electron integrals to Fock matrix:
   LL-integrals.
   SL-integrals from iteration    1
   SS-integrals from iteration    1
    ---> this is default setting from Hamiltonian input
 ***** OUTPUT CONTROL *****
 * Only electron eigenvalues written out.
===========================================================================
 *KRCICALC: General set-up for KR-CI calculation:
===========================================================================
 * Inactive orbitals     :    0   0
 * Active orbitals       :   15   8
 * Active electrons      :    4
 * GAS space setup for   3 GAS space(s) : 
   - GAS space   1       :    2   0
    (constraints: min/max active electrons after space :   0/  4)
   - GAS space   2       :    1   2
    (constraints: min/max active electrons after space :   2/  4)
   - GAS space   3       :   12   6
    (constraints: min/max active electrons after space :   4/  4)
 * CI program used       : LUCIAREL
 * optimization of wave function(s) in the following symmetries:
    **   1 eigenstate(s) in symmetry (boson):  ag
    -- Allowed interval of 2 * MK :  -4 to   4
 * Using symmetry nomenclature for LUCIAREL. 
      Boson and Fermion irreps of complex (sub)groups  

 * Contributions from 2-electron integrals to Fock matrix:
   LL-integrals.
   SL-integrals from iteration    0
   SS-integrals from iteration    0
 * General print level   :    0
===========================================================================
 Control parameters for KR-CI optimization
===========================================================================
 * Maximum number of CI iterations for each symmetry:  16
 * Maximum subspace dimension set to   4
 * Integrals on slave nodes provided by the MASTER
 * Calculation of nat. orb. occ. numbers


 ********************************************************************************
 *************************** Input consistency checks ***************************
 ********************************************************************************



    *************************************************************************
    ************************ End of input processing ************************
    *************************************************************************



                      Nuclear contribution to dipole moments
                      --------------------------------------

                     All components zero by symmetry


                       Generating Lowdin canonical matrix:
                       -----------------------------------

   L   Ag    * Deleted:          3(Proj:          3, Lindep:          0)
   L   B3g   * Deleted:          0(Proj:          0, Lindep:          0)
   L   B2g   * Deleted:          0(Proj:          0, Lindep:          0)
   L   B1g   * Deleted:          0(Proj:          0, Lindep:          0)
   S   B1u   * Deleted:          3(Proj:          3, Lindep:          0)
   S   B2u   * Deleted:          3(Proj:          3, Lindep:          0)
   S   B3u   * Deleted:          3(Proj:          3, Lindep:          0)
   S   Au    * Deleted:          0(Proj:          0, Lindep:          0)
   L   B1u   * Deleted:          0(Proj:          0, Lindep:          0)
   L   B2u   * Deleted:          0(Proj:          0, Lindep:          0)
   L   B3u   * Deleted:          0(Proj:          0, Lindep:          0)
   S   Ag    * Deleted:          4(Proj:          4, Lindep:          0)
   S   B3g   * Deleted:          0(Proj:          0, Lindep:          0)
   S   B2g   * Deleted:          0(Proj:          0, Lindep:          0)
   S   B1g   * Deleted:          0(Proj:          0, Lindep:          0)


                                Output from MODHAM
                                ------------------

* Applied strict kinetic balance !


      **********************************************************************
      ************************* Orbital dimensions *************************
      **********************************************************************

                                   Irrep 1 Irrep 2   Sum
No. of electronic orbitals (NESH):    25      12      37
No. of positronic orbitals (NPSH):    25      12      37
Total no. of orbitals      (NORB):    50      24      74
 >>> Time used in PAMSET is   0.18 seconds


   ****************************************************************************
   ************************* Hartree-Fock calculation *************************
   ****************************************************************************


*** INFO *** No trial vectors found. Using bare nucleus approximation for initial trial vectors.
             Improved by an estimate of the electronic screening (Slater's rules).


########## START ITERATION NO.   1 ##########   Mon Feb 25 14:07:25 2013


=> Calculating sum of orbital energies
It.    1    -8.232779292418      0.00D+00  0.00D+00  0.00D+00               0.03499500s   Scr. nuclei    Mon Feb 25

########## START ITERATION NO.   2 ##########   Mon Feb 25 14:07:25 2013


* GETGAB: label "GABAO1XX" not found; calling GABGEN.
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%    9.19%    4.54%    5.76%   0.01599699s
SOfock:SL  1.00D-12   59.08%   13.75%    4.50%   10.40%   0.08298796s
SOfock:SS  1.00D-12   59.21%   15.35%    3.75%   10.81%   0.15397602s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.70889202s

########## END ITERATION NO.   2 ##########   Mon Feb 25 14:07:25 2013

It.    2    -14.52899342570      6.30D+00  3.60D+00  4.11D-01               0.70889202s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   3 ##########   Mon Feb 25 14:07:25 2013

    3 *** Differential density matrix. DCOVLP     = 0.8979
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%    9.45%    5.30%    5.78%   0.01599801s
SOfock:SL  1.00D-12   61.01%   13.09%    4.26%   10.07%   0.08298695s
SOfock:SS  1.00D-12   59.21%   18.07%    3.62%   10.95%   0.15597701s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.47992796s

########## END ITERATION NO.   3 ##########   Mon Feb 25 14:07:25 2013

It.    3    -14.57222947676      4.32D-02  9.08D-02  4.47D-02   DIIS   2    0.47992796s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   4 ##########   Mon Feb 25 14:07:25 2013

    4 *** Differential density matrix. DCOVLP     = 1.0547
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%    9.48%    4.42%    5.81%   0.01399803s
SOfock:SL  1.00D-12   59.08%   15.74%    4.15%   10.27%   0.08498704s
SOfock:SS  1.00D-12   59.21%   19.99%    3.53%   10.85%   0.15597701s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.45793009s

########## END ITERATION NO.   4 ##########   Mon Feb 25 14:07:25 2013

It.    4    -14.57501582137      2.79D-03  1.73D-02  1.05D-02   DIIS   3    0.45793009s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   5 ##########   Mon Feb 25 14:07:25 2013

    5 *** Differential density matrix. DCOVLP     = 1.0294
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%    9.50%    4.33%    6.59%   0.01599801s
SOfock:SL  1.00D-12   61.01%   14.65%    3.91%   10.01%   0.08298695s
SOfock:SS  1.00D-12   59.21%   21.36%    3.44%   10.80%   0.15597606s
>>> Total wall time: 1.00000000s
>>> Total CPU time : 0.46292984s

########## END ITERATION NO.   5 ##########   Mon Feb 25 14:07:25 2013

It.    5    -14.57522993150      2.14D-04  5.83D-03  6.57D-04   DIIS   4    0.46292984s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   6 ##########   Mon Feb 25 14:07:26 2013

    6 *** Differential density matrix. DCOVLP     = 1.0021
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%    9.90%    2.33%    4.55%   0.01599813s
SOfock:SL  1.00D-12   61.01%   17.25%    2.90%    9.93%   0.08098793s
SOfock:SS  1.00D-12   59.21%   26.67%    3.21%   10.83%   0.15297604s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.42793489s

########## END ITERATION NO.   6 ##########   Mon Feb 25 14:07:26 2013

It.    6    -14.57523072489      7.93D-07  3.99D-04  1.08D-05   DIIS   5    0.42793489s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   7 ##########   Mon Feb 25 14:07:26 2013

    7 *** Differential density matrix. DCOVLP     = 1.0000
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   68.18%   13.22%    2.84%    6.71%   0.01399803s
SOfock:SL  1.00D-12   61.01%   25.04%    1.99%    9.70%   0.07998800s
SOfock:SS  1.00D-12   59.22%   38.22%    3.73%   13.91%   0.14397788s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.43593407s

########## END ITERATION NO.   7 ##########   Mon Feb 25 14:07:26 2013

It.    7    -14.57523072490      1.32D-11 -2.81D-06  4.81D-07   DIIS   6    0.43593407s   LL SL SS       Mon Feb 25

########## START ITERATION NO.   8 ##########   Mon Feb 25 14:07:26 2013

    8 *** Differential density matrix. DCOVLP     = 1.0000
SCR        scr.thr.    Step1    Step2  Coulomb  Exchange    CPU-time
SOfock:LL  1.00D-12   55.62%   24.49%    1.43%    6.75%   0.01499796s
SOfock:SL  1.00D-12   59.99%   27.28%    1.17%   11.15%   0.08098817s
SOfock:SS  1.00D-12   66.14%   32.35%    0.89%   16.82%   0.12997985s
>>> Total wall time: 0.00000000s
>>> Total CPU time : 0.44693184s

########## END ITERATION NO.   8 ##########   Mon Feb 25 14:07:26 2013

It.    8    -14.57523072490      2.01D-13 -2.54D-08  4.00D-08   DIIS   6    0.44693184s   LL SL SS       Mon Feb 25


                                   SCF - CYCLE
                                   -----------

* Convergence on norm of error vector (gradient).
  Desired convergence:1.000D-07
  Allowed convergence:1.000D-06

* ERGVAL - convergence in total energy
* FCKVAL - convergence in maximum change in total Fock matrix
* EVCVAL - convergence in error vector (gradient)
--------------------------------------------------------------------------------------------------------------------------------
           Energy               ERGVAL    FCKVAL    EVCVAL      Conv.acc    CPU          Integrals   Time stamp
--------------------------------------------------------------------------------------------------------------------------------
It.    1    -8.232779292418      0.00D+00  0.00D+00  0.00D+00               0.03499500s   Scr. nuclei    Mon Feb 25
It.    2    -14.52899342570      6.30D+00  3.60D+00  4.11D-01               0.70889202s   LL SL SS       Mon Feb 25
It.    3    -14.57222947676      4.32D-02  9.08D-02  4.47D-02   DIIS   2    0.47992796s   LL SL SS       Mon Feb 25
It.    4    -14.57501582137      2.79D-03  1.73D-02  1.05D-02   DIIS   3    0.45793009s   LL SL SS       Mon Feb 25
It.    5    -14.57522993150      2.14D-04  5.83D-03  6.57D-04   DIIS   4    0.46292984s   LL SL SS       Mon Feb 25
It.    6    -14.57523072489      7.93D-07  3.99D-04  1.08D-05   DIIS   5    0.42793489s   LL SL SS       Mon Feb 25
It.    7    -14.57523072490      1.32D-11 -2.81D-06  4.81D-07   DIIS   6    0.43593407s   LL SL SS       Mon Feb 25
It.    8    -14.57523072490      2.01D-13 -2.54D-08  4.00D-08   DIIS   6    0.44693184s   LL SL SS       Mon Feb 25
--------------------------------------------------------------------------------------------------------------------------------
* Convergence after    8 iterations.
* Average elapsed time per iteration: 
      No 2-ints    :    0.00000000s
      LL SL SS     :    0.14285714s


                                   TOTAL ENERGY
                                   ------------

   Electronic energy                        :    -14.575230724900800

   Other contributions to the total energy
   Nuclear repulsion energy                 :      0.000000000000000

   Sum of all contributions to the energy
   Total energy                             :    -14.575230724900800


                                   Eigenvalues
                                   -----------


* Fermion symmetry E1g
  * Closed shell, f = 1.0000
   -4.733841030378  ( 2)      -0.309392485530  ( 2)
  * Virtual eigenvalues, f = 0.0000
    0.053147916577  ( 2)       0.127262909578  (10)       0.368678822441  ( 2)       0.487139292545  ( 4)       0.487141876846  ( 6)
    1.743326946721  ( 4)       1.743370774350  ( 6)       3.024969633802  ( 2)      14.639093449121  ( 2)      54.810363044882  ( 2)
  204.279713561940  ( 2)     860.023309586417  ( 2)    4657.887332378923  ( 2)

* Fermion symmetry E1u
  * Virtual eigenvalues, f = 0.0000
    0.041259229733  ( 2)       0.041263199674  ( 4)       0.183530076956  ( 2)       0.183542739865  ( 4)       0.735376144871  ( 2)
    0.735418849408  ( 4)       2.597027086965  ( 2)       2.597280907543  ( 4)
* HOMO - LUMO gap:

    E(LUMO) :     0.04125923 au (symmetry E1u)
  - E(HOMO) :    -0.30939249 au (symmetry E1g)
  ------------------------------------------
    gap     :     0.35065172 au



 *******************************************************************************
 ****************************** KR-CI calculation ******************************
 *******************************************************************************

   This is output from DIRAC KR-CI
   - a relativistic four-component CI wave function program.


   General structure: 
     Stefan Knecht and Hans Joergen Aa. Jensen 

   Integral transformation: 
     Luuk Visscher, Jon K. Laerdahl, and Trond Saue

   GASCIP CI code: 
     Joern Thyssen and Hans Joergen Aa. Jensen

   LUCIAREL CI code: 
     Timo Fleig and Jeppe Olsen

   Parallel LUCIAREL CI code: 
     Stefan Knecht, Hans Joergen Aa. Jensen and Timo Fleig

   Linear symmetry implementation (CI and MCSCF):
     Stefan Knecht and Hans Joergen Aa. Jensen

 *******************************************************************************

 This module is published in:

    GASCIP: J Thyssen, T Fleig, and H J Aa Jensen
                       J Chem Phys 129, 034109 (2008), suppl. material.
    DIRAC-LUCIAREL: T Fleig, J Olsen, and L Visscher
                       J Chem Phys, 119,6 (2003) 2963
    PARALLEL LUCIAREL:
                    S. Knecht, H J Aa Jensen, and T Fleig
                       J Chem Phys, 132,1 (2010) 014108

 *******************************************************************************


 ====  below this line is the stderr stream  ====
*** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[taygeta.oulu.fi:14824] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
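
(For what it's worth, I suspect this is the integer-width mismatch that CMake already warned about: "No 64-bit integer MPI interface found, will use 32-bit integer MPI interface". A purely illustrative Python sketch, not DIRAC or MPI code, of why a caller passing 64-bit integers to a library that reads them as 32-bit integers goes wrong:)

```python
import struct

# Illustration only: the caller writes one 64-bit integer into a buffer,
# but a library built for 32-bit integers reads the same bytes back as
# two 32-bit integers, so argument lists get shifted and corrupted.
value = 137                          # the 64-bit integer the caller intends
buf = struct.pack("<q", value)       # 8 bytes, little-endian int64
misread = struct.unpack("<ii", buf)  # misinterpreted as two int32 values
print(misread)                       # -> (137, 0): a spurious extra argument
```
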



And here's ompi_info.out:


                 Package: Open MPI r...@taygeta.oulu.fi Distribution
                Open MPI: 1.4.4
   Open MPI SVN revision: r25188
   Open MPI release date: Sep 27, 2011
                Open RTE: 1.4.4
   Open RTE SVN revision: r25188
   Open RTE release date: Sep 27, 2011
                    OPAL: 1.4.4
       OPAL SVN revision: r25188
       OPAL release date: Sep 27, 2011
            Ident string: 1.4.4
           MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.4)
              MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.4)
           MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.4)
               MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.4)
               MCA carto: file (MCA v2.0, API v2.0, Component v1.4.4)
           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.4)
               MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.4)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.4)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.4)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.4)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.4)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.4)
                MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.4)
                  MCA io: romio (MCA v2.0, API v2.0, Component v1.4.4)
               MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.4)
               MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.4)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA pml: v (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.4)
              MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.4)
                MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.4.4)
                MCA odls: default (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ras: slurm (MCA v2.0, API v2.0, Component v1.4.4)
               MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.4.4)
               MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.4.4)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.4.4)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.4.4)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.4.4)
              MCA routed: direct (MCA v2.0, API v2.0, Component v1.4.4)
              MCA routed: linear (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA plm: rsh (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA plm: slurm (MCA v2.0, API v2.0, Component v1.4.4)
               MCA filem: rsh (MCA v2.0, API v2.0, Component v1.4.4)
              MCA errmgr: default (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ess: env (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ess: hnp (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ess: singleton (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ess: slurm (MCA v2.0, API v2.0, Component v1.4.4)
                 MCA ess: tool (MCA v2.0, API v2.0, Component v1.4.4)
             MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.4.4)
             MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.4.4)
                  Prefix: /export/openmpi/1.4.4/intel-i8
             Exec_prefix: /export/openmpi/1.4.4/intel-i8
                  Bindir: /export/openmpi/1.4.4/intel-i8/bin
                 Sbindir: /export/openmpi/1.4.4/intel-i8/sbin
                  Libdir: /export/openmpi/1.4.4/intel-i8/lib
                  Incdir: /export/openmpi/1.4.4/intel-i8/include
                  Mandir: /export/openmpi/1.4.4/intel-i8/share/man
               Pkglibdir: /export/openmpi/1.4.4/intel-i8/lib/openmpi
              Libexecdir: /export/openmpi/1.4.4/intel-i8/libexec
             Datarootdir: /export/openmpi/1.4.4/intel-i8/share
                 Datadir: /export/openmpi/1.4.4/intel-i8/share
              Sysconfdir: /export/openmpi/1.4.4/intel-i8/etc
          Sharedstatedir: /export/openmpi/1.4.4/intel-i8/com
           Localstatedir: /export/openmpi/1.4.4/intel-i8/var
                 Infodir: /export/openmpi/1.4.4/intel-i8/share/info
              Pkgdatadir: /export/openmpi/1.4.4/intel-i8/share/openmpi
               Pkglibdir: /export/openmpi/1.4.4/intel-i8/lib/openmpi
           Pkgincludedir: /export/openmpi/1.4.4/intel-i8/include/openmpi
 Configured architecture: x86_64-unknown-linux-gnu
          Configure host: taygeta.oulu.fi
           Configured by: rar
           Configured on: Fri Feb 22 11:43:57 EET 2013
          Configure host: taygeta.oulu.fi
                Built by: rar
                Built on: Fri Feb 22 11:57:36 EET 2013
              Built host: taygeta.oulu.fi
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: icc
     C compiler absolute: /export/intel/composerxe-2011.5.220/bin/intel64/icc
             C char size: 1
             C bool size: 1
            C short size: 2
              C int size: 4
             C long size: 8
            C float size: 4
           C double size: 8
          C pointer size: 8
            C char align: 1
            C bool align: 1
             C int align: 4
           C float align: 4
          C double align: 8
            C++ compiler: icpc
   C++ compiler absolute: /export/intel/composerxe-2011.5.220/bin/intel64/icpc
      Fortran77 compiler: ifort
  Fortran77 compiler abs: /export/intel/composerxe-2011.5.220/bin/intel64/ifort
      Fortran90 compiler: ifort
  Fortran90 compiler abs: /export/intel/composerxe-2011.5.220/bin/intel64/ifort
       Fort integer size: 8
       Fort logical size: 8
 Fort logical value true: -1
      Fort have integer1: yes
      Fort have integer2: yes
      Fort have integer4: yes
      Fort have integer8: yes
     Fort have integer16: no
         Fort have real4: yes
         Fort have real8: yes
        Fort have real16: no
      Fort have complex8: yes
     Fort have complex16: yes
     Fort have complex32: no
      Fort integer1 size: 1
      Fort integer2 size: 2
      Fort integer4 size: 4
      Fort integer8 size: 8
     Fort integer16 size: -1
          Fort real size: 4
         Fort real4 size: 4
         Fort real8 size: 8
        Fort real16 size: 16
      Fort dbl prec size: 4
          Fort cplx size: 4
      Fort dbl cplx size: 4
         Fort cplx8 size: 8
        Fort cplx16 size: 16
        Fort cplx32 size: 32
      Fort integer align: 1
     Fort integer1 align: 1
     Fort integer2 align: 1
     Fort integer4 align: 1
     Fort integer8 align: 1
    Fort integer16 align: -1
         Fort real align: 1
        Fort real4 align: 1
        Fort real8 align: 1
       Fort real16 align: 1
     Fort dbl prec align: 1
         Fort cplx align: 1
     Fort dbl cplx align: 1
        Fort cplx8 align: 1
       Fort cplx16 align: 1
       Fort cplx32 align: 1
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
           Sparse Groups: no
            Build CFLAGS: -O3 -DNDEBUG -finline-functions -fno-strict-aliasing -restrict -pthread -fvisibility=hidden
          Build CXXFLAGS: -O3 -DNDEBUG -finline-functions -pthread
            Build FFLAGS: -i8
           Build FCFLAGS: -i8
           Build LDFLAGS: -export-dynamic  
              Build LIBS: -lnsl -lutil  
    Wrapper extra CFLAGS: -pthread 
  Wrapper extra CXXFLAGS: -pthread 
    Wrapper extra FFLAGS:  
   Wrapper extra FCFLAGS:  
   Wrapper extra LDFLAGS:     
      Wrapper extra LIBS:   -ldl   -Wl,--export-dynamic -lnsl -lutil 
  Internal debug support: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: yes
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
Symbol visibility support: yes
   FT Checkpoint support: no  (checkpoint thread: no)
                 MCA mca: parameter "mca_param_files" (current value: "/home/loytyntu/.openmpi/mca-params.conf:/export/openmpi/1.4.4/intel-i8/etc/openmpi-mca-params.conf", data source: default value)
                          Path for MCA configuration files containing default parameter values
                 MCA mca: parameter "mca_base_param_file_prefix" (current value: <none>, data source: default value)
                          Aggregate MCA parameter file sets
                 MCA mca: parameter "mca_base_param_file_path" (current value: "/export/openmpi/1.4.4/intel-i8/share/openmpi/amca-param-sets:/home/loytyntu/bin/DIRAC-12.2-Source_64", data source: default value)
                          Aggregate MCA parameter Search path
                 MCA mca: parameter "mca_base_param_file_path_force" (current value: <none>, data source: default value)
                          Forced Aggregate MCA parameter Search path
                 MCA mca: parameter "mca_component_path" (current value: "/export/openmpi/1.4.4/intel-i8/lib/openmpi:/home/loytyntu/.openmpi/components", data source: default value)
                          Path where to look for Open MPI and ORTE components
                 MCA mca: parameter "mca_verbose" (current value: <none>, data source: default value)
                          Top-level verbosity parameter
                 MCA mca: parameter "mca_component_show_load_errors" (current value: "1", data source: default value)
                          Whether to show errors for components that failed to load or not
                 MCA mca: parameter "mca_component_disable_dlopen" (current value: "0", data source: default value)
                          Whether to attempt to disable opening dynamic components or not
                 MCA mpi: parameter "mpi_paffinity_alone" (current value: "0", data source: default value, synonym of: opal_paffinity_alone)
                          If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0
                 MCA mpi: parameter "mpi_param_check" (current value: "1", data source: default value)
                          Whether you want MPI API parameters checked at run-time or not.  Possible values are 0 (no checking) and 1 (perform checking at run-time)
                 MCA mpi: parameter "mpi_yield_when_idle" (current value: "-1", data source: default value)
                          Yield the processor when waiting for MPI communication (for MPI processes, will default to 1 when oversubscribing nodes)
                 MCA mpi: parameter "mpi_event_tick_rate" (current value: "-1", data source: default value)
                          How often to progress TCP communications (0 = never, otherwise specified in microseconds)
                 MCA mpi: parameter "mpi_show_handle_leaks" (current value: "0", data source: default value)
                          Whether MPI_FINALIZE shows all MPI handles that were not freed or not
                 MCA mpi: parameter "mpi_no_free_handles" (current value: "0", data source: default value)
                          Whether to actually free MPI objects when their handles are freed
                 MCA mpi: parameter "mpi_show_mpi_alloc_mem_leaks" (current value: "0", data source: default value)
                          If >0, MPI_FINALIZE will show up to this many instances of memory allocated by MPI_ALLOC_MEM that was not freed by MPI_FREE_MEM
                 MCA mpi: parameter "mpi_show_mca_params" (current value: <none>, data source: default value)
                          Whether to show all MCA parameter values during MPI_INIT or not (good for reproducability of MPI jobs for debug purposes). Accepted values are all, default, file, api, and enviro - or a comma delimited combination of them
                 MCA mpi: parameter "mpi_show_mca_params_file" (current value: <none>, data source: default value)
                          If mpi_show_mca_params is true, setting this string to a valid filename tells Open MPI to dump all the MCA parameter values into a file suitable for reading via the mca_param_files parameter (good for reproducability of MPI jobs)
                 MCA mpi: parameter "mpi_keep_peer_hostnames" (current value: "1", data source: default value)
                          If nonzero, save the string hostnames of all MPI peer processes (mostly for error / debugging output messages).  This can add quite a bit of memory usage to each MPI process.
                 MCA mpi: parameter "mpi_abort_delay" (current value: "0", data source: default value)
                          If nonzero, print out an identifying message when MPI_ABORT is invoked (hostname, PID of the process that called MPI_ABORT) and delay for that many seconds before exiting (a negative delay value means to never abort).  This allows attaching of a debugger before quitting the job.
                 MCA mpi: parameter "mpi_abort_print_stack" (current value: "0", data source: default value)
                          If nonzero, print out a stack trace when MPI_ABORT is invoked
                 MCA mpi: parameter "mpi_preconnect_mpi" (current value: "0", data source: default value, synonyms: mpi_preconnect_all)
                          Whether to force MPI processes to fully wire-up the MPI connections between MPI processes during MPI_INIT (vs. making connections lazily -- upon the first MPI traffic between each process peer pair)
                 MCA mpi: parameter "mpi_preconnect_all" (current value: "0", data source: default value, deprecated, synonym of: mpi_preconnect_mpi)
                          Whether to force MPI processes to fully wire-up the MPI connections between MPI processes during MPI_INIT (vs. making connections lazily -- upon the first MPI traffic between each process peer pair)
                 MCA mpi: parameter "mpi_leave_pinned" (current value: "-1", data source: default value)
                          Whether to use the "leave pinned" protocol or not.  Enabling this setting can help bandwidth performance when repeatedly sending and receiving large messages with the same buffers over RDMA-based networks (0 = do not use "leave pinned" protocol, 1 = use "leave pinned" protocol, -1 = allow network to choose at runtime).
                 MCA mpi: parameter "mpi_leave_pinned_pipeline" (current value: "0", data source: default value)
                          Whether to use the "leave pinned pipeline" protocol or not.
                 MCA mpi: parameter "mpi_warn_on_fork" (current value: "1", data source: default value)
                          If nonzero, issue a warning if program forks under conditions that could cause system errors
                 MCA mpi: information "mpi_have_sparse_group_storage" (value: "0", data source: default value)
                          Whether this Open MPI installation supports storing of data in MPI groups in "sparse" formats (good for extremely large process count MPI jobs that create many communicators/groups)
                 MCA mpi: parameter "mpi_use_sparse_group_storage" (current value: "0", data source: default value)
                          Whether to use "sparse" storage formats for MPI groups (only relevant if mpi_have_sparse_group_storage is 1)
                MCA orte: parameter "orte_base_help_aggregate" (current value: "1", data source: default value)
                          If orte_base_help_aggregate is true, duplicate help messages will be aggregated rather than displayed individually.  This can be helpful for parallel jobs that experience multiple identical failures; rather than print out the same help/failure message N times, display it once with a count of how many processes sent the same message.
                MCA orte: parameter "orte_tmpdir_base" (current value: <none>, data source: default value)
                          Base of the session directory tree
                MCA orte: parameter "orte_no_session_dirs" (current value: <none>, data source: default value)
                          Prohibited locations for session directories (multiple locations separated by ',', default=NULL)
                MCA orte: parameter "orte_debug" (current value: "0", data source: default value)
                          Top-level ORTE debug switch (default verbosity: 1)
                MCA orte: parameter "orte_debug_verbose" (current value: "-1", data source: default value)
                          Verbosity level for ORTE debug messages (default: 1)
                MCA orte: parameter "orte_debug_daemons" (current value: "0", data source: default value)
                          Whether to debug the ORTE daemons or not
                MCA orte: parameter "orte_debug_daemons_file" (current value: "0", data source: default value)
                          Whether want stdout/stderr of daemons to go to a file or not
                MCA orte: parameter "orte_leave_session_attached" (current value: "0", data source: default value)
                          Whether applications and/or daemons should leave their sessions attached so that any output can be received - this allows X forwarding without all the attendant debugging output
                MCA orte: parameter "orte_do_not_launch" (current value: "0", data source: default value)
                          Perform all necessary operations to prepare to launch the application, but do not actually launch it
                MCA orte: parameter "orte_daemon_spin" (current value: "0", data source: default value)
                          Have any orteds spin until we can connect a debugger to them
                MCA orte: parameter "orte_daemon_fail" (current value: "-1", data source: default value)
                          Have the specified orted fail after init for debugging purposes
                MCA orte: parameter "orte_daemon_fail_delay" (current value: "0", data source: default value)
                          Have the specified orted fail after specified number of seconds (default: 0 => no delay)
                MCA orte: parameter "orte_heartbeat_rate" (current value: "0", data source: default value)
                          Seconds between checks for daemon state-of-health (default: 0 => do not check)
                MCA orte: parameter "orte_startup_timeout" (current value: "0", data source: default value)
                          Milliseconds/daemon to wait for startup before declaring failed_to_start (default: 0 => do not check)
                MCA orte: parameter "orte_timing" (current value: "0", data source: default value)
                          Request that critical timing loops be measured
                MCA orte: parameter "orte_base_user_debugger" (current value: "totalview @mpirun@ -a @mpirun_args@ : ddt -n @np@ -start @executable@ @executable_argv@ @single_app@ : fxp @mpirun@ -a @mpirun_args@", data source: default value)
                          Sequence of user-level debuggers to search for in orterun
                MCA orte: parameter "orte_abort_timeout" (current value: "1", data source: default value)
                          Max time to wait [in secs] before aborting an ORTE operation (default: 1sec)
                MCA orte: parameter "orte_timeout_step" (current value: "1000", data source: default value)
                          Time to wait [in usecs/proc] before aborting an ORTE operation (default: 1000 usec/proc)
                MCA orte: parameter "orte_default_hostfile" (current value: <none>, data source: default value)
                          Name of the default hostfile (relative or absolute path)
                MCA orte: parameter "orte_rankfile" (current value: <none>, data source: default value, synonyms: rmaps_rank_file_path)
                          Name of the rankfile to be used for mapping processes (relative or absolute path)
                MCA orte: parameter "orte_keep_fqdn_hostnames" (current value: "0", data source: default value)
                          Whether or not to keep FQDN hostnames [default: no]
                MCA orte: parameter "orte_contiguous_nodes" (current value: "2147483647", data source: default value)
                          Number of nodes after which contiguous nodename encoding will automatically be used [default: INT_MAX]
                MCA orte: parameter "orte_tag_output" (current value: "0", data source: default value)
                          Tag all output with [job,rank] (default: false)
                MCA orte: parameter "orte_xml_output" (current value: "0", data source: default value)
                          Display all output in XML format (default: false)
                MCA orte: parameter "orte_xml_file" (current value: <none>, data source: default value)
                          Provide all output in XML format to the specified file
                MCA orte: parameter "orte_timestamp_output" (current value: "0", data source: default value)
                          Timestamp all application process output (default: false)
                MCA orte: parameter "orte_output_filename" (current value: <none>, data source: default value)
                          Redirect output from application processes into filename.rank [default: NULL]
                MCA orte: parameter "orte_show_resolved_nodenames" (current value: "0", data source: default value)
                          Display any node names that are resolved to a different name (default: false)
                MCA orte: parameter "orte_hetero_apps" (current value: "0", data source: default value)
                          Indicates that multiple app_contexts are being provided that are a mix of 32/64 bit binaries (default: false)
                MCA orte: parameter "orte_launch_agent" (current value: "orted", data source: default value)
                          Command used to start processes on remote nodes (default: orted)
                MCA orte: parameter "orte_allocation_required" (current value: "0", data source: default value)
                          Whether or not an allocation by a resource manager is required [default: no]
                MCA orte: parameter "orte_xterm" (current value: <none>, data source: default value)
                          Create a new xterm window and display output from the specified ranks there [default: none]
                MCA orte: parameter "orte_forward_job_control" (current value: "0", data source: default value)
                          Forward SIGTSTP (after converting to SIGSTOP) and SIGCONT signals to the application procs [default: no]
                MCA orte: parameter "orte_report_launch_progress" (current value: "0", data source: default value)
                          Output a brief periodic report on launch progress [default: no]
                MCA orte: parameter "orte_num_boards" (current value: "1", data source: default value)
                          Number of processor boards/node (1-256) [default: 1]
                MCA orte: parameter "orte_num_sockets" (current value: "0", data source: default value)
                          Number of sockets/board (1-256)
                MCA orte: parameter "orte_num_cores" (current value: "0", data source: default value)
                          Number of cores/socket (1-256)
                MCA orte: parameter "orte_cpu_set" (current value: <none>, data source: default value)
                          Comma-separated list of ranges specifying logical cpus allocated to this job [default: none]
                MCA orte: parameter "orte_process_binding" (current value: <none>, data source: default value)
                          Policy for binding processes [none | core | socket | board] (supported qualifier: if-avail)
                MCA opal: parameter "opal_net_private_ipv4" (current value: "10.0.0.0/8;172.16.0.0/12;192.168.0.0/16;169.254.0.0/16", data source: default value)
                          Semicolon-delimited list of CIDR notation entries specifying what networks are considered "private" (default value based on RFC1918 and RFC3330)
                MCA opal: parameter "opal_signal" (current value: "6,7,8,11", data source: default value)
                          Comma-delimited list of integer signal numbers to Open MPI to attempt to intercept.  Upon receipt of the intercepted signal, Open MPI will display a stack trace and abort.  Open MPI will *not* replace signals if handlers are already installed by the time MPI_INIT is invoked.  Optionally append ":complain" to any signal number in the comma-delimited list to make Open MPI complain if it detects another signal handler (and therefore does not insert its own).
                MCA opal: parameter "opal_paffinity_alone" (current value: "0", data source: default value, synonyms: mpi_paffinity_alone)
                          If nonzero, assume that this job is the only (set of) process(es) running on each node and bind processes to processors, starting with processor ID 0
                MCA opal: parameter "opal_set_max_sys_limits" (current value: "0", data source: default value)
                          Set to non-zero to automatically set any system-imposed limits to the maximum allowed
                MCA opal: parameter "opal_event_include" (current value: "poll", data source: default value)
                          Comma-delimited list of libevent subsystems to use (poll, select -- available on your platform)
           MCA backtrace: parameter "backtrace" (current value: <none>, data source: default value)
                          Default selection set of components for the backtrace framework (<none> means use all components that can be found)
           MCA backtrace: parameter "backtrace_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the backtrace framework (0 = no verbosity)
           MCA backtrace: parameter "backtrace_execinfo_priority" (current value: "0", data source: default value)
          MCA memchecker: parameter "memchecker" (current value: <none>, data source: default value)
                          Default selection set of components for the memchecker framework (<none> means use all components that can be found)
              MCA memory: parameter "memory" (current value: <none>, data source: default value)
                          Default selection set of components for the memory framework (<none> means use all components that can be found)
              MCA memory: parameter "memory_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the memory framework (0 = no verbosity)
              MCA memory: parameter "memory_ptmalloc2_priority" (current value: "0", data source: default value)
           MCA paffinity: parameter "paffinity_base_verbose" (current value: "0", data source: default value)
                          Verbosity level of the paffinity framework
           MCA paffinity: parameter "paffinity" (current value: <none>, data source: default value)
                          Default selection set of components for the paffinity framework (<none> means use all components that can be found)
           MCA paffinity: parameter "paffinity_linux_priority" (current value: "10", data source: default value)
                          Priority of the linux paffinity component
           MCA paffinity: information "paffinity_linux_plpa_version" (value: "1.3.2", data source: default value)
                          Version of PLPA that is embedded in Open MPI
               MCA carto: parameter "carto_base_verbose" (current value: "0", data source: default value)
                          Verbosity level of the carto framework
               MCA carto: parameter "carto" (current value: <none>, data source: default value)
                          Default selection set of components for the carto framework (<none> means use all components that can be found)
               MCA carto: parameter "carto_auto_detect_priority" (current value: "11", data source: default value)
                          Priority of the auto_detect carto component
               MCA carto: parameter "carto_file_path" (current value: <none>, data source: default value)
                          The path to the cartography file
               MCA carto: parameter "carto_file_priority" (current value: "10", data source: default value)
                          Priority of the file carto component
           MCA maffinity: parameter "maffinity_base_verbose" (current value: "0", data source: default value)
                          Verbosity level of the maffinity framework
           MCA maffinity: parameter "maffinity" (current value: <none>, data source: default value)
                          Default selection set of components for the maffinity framework (<none> means use all components that can be found)
           MCA maffinity: parameter "maffinity_first_use_priority" (current value: "10", data source: default value)
                          Priority of the first_use maffinity component
               MCA timer: parameter "timer" (current value: <none>, data source: default value)
                          Default selection set of components for the timer framework (<none> means use all components that can be found)
               MCA timer: parameter "timer_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the timer framework (0 = no verbosity)
               MCA timer: parameter "timer_linux_priority" (current value: "0", data source: default value)
                 MCA dpm: parameter "dpm" (current value: <none>, data source: default value)
                          Default selection set of components for the dpm framework (<none> means use all components that can be found)
                 MCA dpm: parameter "dpm_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the dpm framework (0 = no verbosity)
              MCA pubsub: parameter "pubsub" (current value: <none>, data source: default value)
                          Default selection set of components for the pubsub framework (<none> means use all components that can be found)
              MCA pubsub: parameter "pubsub_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the pubsub framework (0 = no verbosity)
              MCA pubsub: parameter "pubsub_orte_priority" (current value: "0", data source: default value)
           MCA allocator: parameter "allocator" (current value: <none>, data source: default value)
                          Default selection set of components for the allocator framework (<none> means use all components that can be found)
           MCA allocator: parameter "allocator_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the allocator framework (0 = no verbosity)
           MCA allocator: parameter "allocator_basic_priority" (current value: "0", data source: default value)
           MCA allocator: parameter "allocator_bucket_num_buckets" (current value: "30", data source: default value)
           MCA allocator: parameter "allocator_bucket_priority" (current value: "0", data source: default value)
                MCA coll: parameter "coll" (current value: <none>, data source: default value)
                          Default selection set of components for the coll framework (<none> means use all components that can be found)
                MCA coll: parameter "coll_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the coll framework (0 = no verbosity)
                MCA coll: parameter "coll_basic_priority" (current value: "10", data source: default value)
                          Priority of the basic coll component
                MCA coll: parameter "coll_basic_crossover" (current value: "4", data source: default value)
                          Minimum number of processes in a communicator before using the logarithmic algorithms
                MCA coll: parameter "coll_hierarch_priority" (current value: "0", data source: default value)
                          Priority of the hierarchical coll component
                MCA coll: parameter "coll_hierarch_verbose" (current value: "0", data source: default value)
                          Turn verbose message of the hierarchical coll component on/off
                MCA coll: parameter "coll_hierarch_use_rdma" (current value: "0", data source: default value)
                          Switch from the send btl list used to detect hierarchies to the rdma btl list
                MCA coll: parameter "coll_hierarch_ignore_sm" (current value: "0", data source: default value)
                          Ignore sm protocol when detecting hierarchies. Required to enable the usage of protocol specific collective operations
                MCA coll: parameter "coll_hierarch_detection_alg" (current value: "2", data source: default value)
                          Used to specify the algorithm for detecting hierarchy. To specify all levels or two levels of hierarchy
                MCA coll: parameter "coll_inter_priority" (current value: "40", data source: default value)
                          Priority of the inter coll component
                MCA coll: parameter "coll_inter_verbose" (current value: "0", data source: default value)
                          Turn verbose message of the inter coll component on/off
                MCA coll: parameter "coll_self_priority" (current value: "75", data source: default value)
                MCA coll: parameter "coll_sm_priority" (current value: "0", data source: default value)
                          Priority of the sm coll component
                MCA coll: parameter "coll_sm_control_size" (current value: "4096", data source: default value)
                          Length of the control data -- should usually be either the length of a cache line on most SMPs, or the size of a page on machines that support direct memory affinity page placement (in bytes)
                MCA coll: parameter "coll_sm_fragment_size" (current value: "8192", data source: default value)
                          Fragment size (in bytes) used for passing data through shared memory (will be rounded up to the nearest control_size size)
                MCA coll: parameter "coll_sm_comm_in_use_flags" (current value: "2", data source: default value)
                          Number of "in use" flags, used to mark a message passing area segment as currently being used or not (must be >= 2 and <= comm_num_segments)
                MCA coll: parameter "coll_sm_comm_num_segments" (current value: "128", data source: default value)
                          Number of segments in each communicator's shared memory message passing area (must be >= 2, and must be a multiple of comm_in_use_flags)
                MCA coll: parameter "coll_sm_tree_degree" (current value: "4", data source: default value)
                          Degree of the tree for tree-based operations (must be => 1 and <= min(control_size, 255))
                MCA coll: parameter "coll_sm_info_num_procs" (current value: "4", data source: default value)
                          Number of processes to use for the calculation of the shared_mem_size MCA information parameter (must be => 2)
                MCA coll: information "coll_sm_shared_mem_used_data" (value: "8413184", data source: default value)
                          Amount of shared memory used, per communicator, in the shared memory data area for info_num_procs processes (in bytes)
                MCA coll: parameter "coll_sync_priority" (current value: "50", data source: default value)
                          Priority of the sync coll component; only relevant if barrier_before or barrier_after is > 0
                MCA coll: parameter "coll_sync_barrier_before" (current value: "1000", data source: default value)
                          Do a synchronization before each Nth collective
                MCA coll: parameter "coll_sync_barrier_after" (current value: "0", data source: default value)
                          Do a synchronization after each Nth collective
                MCA coll: parameter "coll_tuned_priority" (current value: "30", data source: default value)
                          Priority of the tuned coll component
                MCA coll: parameter "coll_tuned_pre_allocate_memory_comm_size_limit" (current value: "32768", data source: default value)
                          Size of communicator where we stop pre-allocating memory for the fixed internal buffer used for message requests etc. that is hung off the communicator data segment. I.e. if you have 100'000 nodes you might not want to pre-allocate 200'000 request handle slots per communicator instance!
                MCA coll: parameter "coll_tuned_init_tree_fanout" (current value: "4", data source: default value)
                          Initial fanout used in the tree topologies for each communicator. This is only an initial guess; if a tuned collective needs a different fanout for an operation, it builds it dynamically. This parameter is only for the first guess and might save a little time
                MCA coll: parameter "coll_tuned_init_chain_fanout" (current value: "4", data source: default value)
                          Initial fanout used in the chain (fanout followed by pipeline) topologies for each communicator. This is only an initial guess; if a tuned collective needs a different fanout for an operation, it builds it dynamically. This parameter is only for the first guess and might save a little time
                MCA coll: parameter "coll_tuned_use_dynamic_rules" (current value: "0", data source: default value)
                          Switch used to decide if we use static (compiled/if statements) or dynamic (built at runtime) decision function rules
                  MCA io: parameter "io_base_freelist_initial_size" (current value: "16", data source: default value)
                          Initial MPI-2 IO request freelist size
                  MCA io: parameter "io_base_freelist_max_size" (current value: "64", data source: default value)
                          Max size of the MPI-2 IO request freelist
                  MCA io: parameter "io_base_freelist_increment" (current value: "16", data source: default value)
                          Increment size of the MPI-2 IO request freelist
                  MCA io: parameter "io" (current value: <none>, data source: default value)
                          Default selection set of components for the io framework (<none> means use all components that can be found)
                  MCA io: parameter "io_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the io framework (0 = no verbosity)
                  MCA io: parameter "io_romio_priority" (current value: "10", data source: default value)
                          Priority of the io romio component
                  MCA io: parameter "io_romio_delete_priority" (current value: "10", data source: default value)
                          Delete priority of the io romio component
                  MCA io: information "io_romio_version" (value: "from MPICH2 v1.0.7 with additional compilation/bug patches from romio...@mcs.anl.gov", data source: default value)
                          Version of ROMIO
                  MCA io: information "io_romio_user_configure_params" (value: <none>, data source: default value)
                          User-specified command line parameters passed to ROMIO's configure script
                  MCA io: information "io_romio_complete_configure_params" (value: " CFLAGS='-O3 -DNDEBUG -finline-functions -fno-strict-aliasing -restrict -pthread' CPPFLAGS=' ' FFLAGS='-i8' LDFLAGS=' ' --enable-shared --disable-static  --prefix=/export/openmpi/1.4.4/intel-i8 --with-mpi=open_mpi --disable-aio", data source: default value)
                          Complete set of command line parameters passed to ROMIO's configure script
               MCA mpool: parameter "mpool" (current value: <none>, data source: default value)
                          Default selection set of components for the mpool framework (<none> means use all components that can be found)
               MCA mpool: parameter "mpool_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the mpool framework (0 = no verbosity)
               MCA mpool: parameter "mpool_fake_priority" (current value: "0", data source: default value)
               MCA mpool: parameter "mpool_rdma_rcache_name" (current value: "vma", data source: default value)
                          The name of the registration cache the mpool should use
               MCA mpool: parameter "mpool_rdma_rcache_size_limit" (current value: "0", data source: default value)
                          the maximum size of registration cache in bytes. 0 is unlimited (default 0)
               MCA mpool: parameter "mpool_rdma_print_stats" (current value: "0", data source: default value)
                          print pool usage statistics at the end of the run
               MCA mpool: parameter "mpool_rdma_priority" (current value: "0", data source: default value)
               MCA mpool: parameter "mpool_sm_allocator" (current value: "bucket", data source: default value)
                          Name of allocator component to use with sm mpool
               MCA mpool: parameter "mpool_sm_min_size" (current value: "67108864", data source: default value)
                          Minimum size of the sm mpool shared memory file
               MCA mpool: parameter "mpool_sm_verbose" (current value: "0", data source: default value)
                          Enable verbose output for mpool sm component
               MCA mpool: parameter "mpool_sm_priority" (current value: "0", data source: default value)
                 MCA pml: parameter "pml_base_verbose" (current value: "0", data source: default value)
                          Verbosity level of the PML framework
                 MCA pml: parameter "pml" (current value: <none>, data source: default value)
                          Default selection set of components for the pml framework (<none> means use all components that can be found)
                 MCA pml: parameter "pml_cm_free_list_num" (current value: "4", data source: default value)
                          Initial size of request free lists
                 MCA pml: parameter "pml_cm_free_list_max" (current value: "-1", data source: default value)
                          Maximum size of request free lists
                 MCA pml: parameter "pml_cm_free_list_inc" (current value: "64", data source: default value)
                          Number of elements to add when growing request free lists
                 MCA pml: parameter "pml_cm_priority" (current value: "30", data source: default value)
                          CM PML selection priority
                 MCA pml: parameter "pml_csum_free_list_num" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_csum_free_list_max" (current value: "-1", data source: default value)
                 MCA pml: parameter "pml_csum_free_list_inc" (current value: "64", data source: default value)
                 MCA pml: parameter "pml_csum_send_pipeline_depth" (current value: "3", data source: default value)
                 MCA pml: parameter "pml_csum_recv_pipeline_depth" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_csum_rdma_put_retries_limit" (current value: "5", data source: default value)
                 MCA pml: parameter "pml_csum_max_rdma_per_request" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_csum_max_send_per_range" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_csum_unexpected_limit" (current value: "128", data source: default value)
                 MCA pml: parameter "pml_csum_allocator" (current value: "bucket", data source: default value)
                          Name of allocator component for unexpected messages
                 MCA pml: parameter "pml_csum_priority" (current value: "0", data source: default value)
                 MCA pml: parameter "pml_ob1_free_list_num" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_ob1_free_list_max" (current value: "-1", data source: default value)
                 MCA pml: parameter "pml_ob1_free_list_inc" (current value: "64", data source: default value)
                 MCA pml: parameter "pml_ob1_priority" (current value: "20", data source: default value)
                 MCA pml: parameter "pml_ob1_send_pipeline_depth" (current value: "3", data source: default value)
                 MCA pml: parameter "pml_ob1_recv_pipeline_depth" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_ob1_rdma_put_retries_limit" (current value: "5", data source: default value)
                 MCA pml: parameter "pml_ob1_max_rdma_per_request" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_ob1_max_send_per_range" (current value: "4", data source: default value)
                 MCA pml: parameter "pml_ob1_unexpected_limit" (current value: "128", data source: default value)
                 MCA pml: parameter "pml_ob1_allocator" (current value: "bucket", data source: default value)
                          Name of allocator component for unexpected messages
                 MCA pml: parameter "pml_v_priority" (current value: "-1", data source: default value)
                 MCA pml: parameter "pml_v_output" (current value: "stderr", data source: default value)
                 MCA pml: parameter "pml_v_verbose" (current value: "0", data source: default value)
                 MCA bml: parameter "bml" (current value: <none>, data source: default value)
                          Default selection set of components for the bml framework (<none> means use all components that can be found)
                 MCA bml: parameter "bml_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the bml framework (0 = no verbosity)
                 MCA bml: parameter "bml_r2_show_unreach_errors" (current value: "1", data source: default value)
                          Show error message when procs are unreachable
                 MCA bml: parameter "bml_r2_priority" (current value: "0", data source: default value)
              MCA rcache: parameter "rcache" (current value: <none>, data source: default value)
                          Default selection set of components for the rcache framework (<none> means use all components that can be found)
              MCA rcache: parameter "rcache_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the rcache framework (0 = no verbosity)
              MCA rcache: parameter "rcache_vma_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_base_verbose" (current value: "0", data source: default value)
                          Verbosity level of the BTL framework
                 MCA btl: parameter "btl" (current value: <none>, data source: default value)
                          Default selection set of components for the btl framework (<none> means use all components that can be found)
                 MCA btl: parameter "btl_ofud_max_btls" (current value: "4", data source: default value)
                          Maximum number of HCAs/ports to use
                 MCA btl: parameter "btl_ofud_mpool" (current value: "rdma", data source: default value)
                          Name of the memory pool to be used
                 MCA btl: parameter "btl_ofud_ib_pkey_index" (current value: "0", data source: default value)
                          IB pkey index
                 MCA btl: parameter "btl_ofud_ib_qkey" (current value: "20119859", data source: default value)
                          IB qkey
                 MCA btl: parameter "btl_ofud_ib_service_level" (current value: "0", data source: default value)
                          IB service level
                 MCA btl: parameter "btl_ofud_ib_src_path_bits" (current value: "0", data source: default value)
                          IB source path bits
                 MCA btl: parameter "btl_ofud_sd_num" (current value: "128", data source: default value)
                          maximum send descriptors to post
                 MCA btl: parameter "btl_ofud_rd_num" (current value: "6000", data source: default value)
                          number of receive buffers
                 MCA btl: parameter "btl_ofud_min_send_size" (current value: "2048", data source: default value)
                          minimum send size
                 MCA btl: parameter "btl_ofud_max_send_size" (current value: "2048", data source: default value)
                          maximum send size
                 MCA btl: parameter "btl_ofud_exclusivity" (current value: "1024", data source: default value)
                          BTL exclusivity
                 MCA btl: parameter "btl_ofud_bandwidth" (current value: "800", data source: default value)
                          Approximate maximum bandwidth of interconnect
                 MCA btl: parameter "btl_ofud_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_openib_verbose" (current value: "0", data source: default value)
                          Output some verbose OpenIB BTL information (0 = no output, nonzero = output)
                 MCA btl: parameter "btl_openib_warn_no_device_params_found" (current value: "1", data source: default value, synonyms: btl_openib_warn_no_hca_params_found)
                          Warn when no device-specific parameters are found in the INI file specified by the btl_openib_device_param_files MCA parameter (0 = do not warn; any other value = warn)
                 MCA btl: parameter "btl_openib_warn_no_hca_params_found" (current value: "1", data source: default value, deprecated, synonym of: btl_openib_warn_no_device_params_found)
                          Warn when no device-specific parameters are found in the INI file specified by the btl_openib_device_param_files MCA parameter (0 = do not warn; any other value = warn)
                 MCA btl: parameter "btl_openib_warn_default_gid_prefix" (current value: "1", data source: default value)
                          Warn when there is more than one active port and at least one of them is connected to the network with only the default GID prefix configured (0 = do not warn; any other value = warn)
                 MCA btl: parameter "btl_openib_warn_nonexistent_if" (current value: "1", data source: default value)
                          Warn if non-existent devices and/or ports are specified in the btl_openib_if_[in|ex]clude MCA parameters (0 = do not warn; any other value = warn)
                 MCA btl: parameter "btl_openib_want_fork_support" (current value: "-1", data source: default value)
                          Whether fork support is desired or not (negative = try to enable fork support, but continue even if it is not available, 0 = do not enable fork support, positive = try to enable fork support and fail if it is not available)
                 MCA btl: parameter "btl_openib_device_param_files" (current value: "/export/openmpi/1.4.4/intel-i8/share/openmpi/mca-btl-openib-device-params.ini", data source: default value, synonyms: btl_openib_hca_param_files)
                          Colon-delimited list of INI-style files that contain device vendor/part-specific parameters
                 MCA btl: parameter "btl_openib_hca_param_files" (current value: "/export/openmpi/1.4.4/intel-i8/share/openmpi/mca-btl-openib-device-params.ini", data source: default value, deprecated, synonym of: btl_openib_device_param_files)
                          Colon-delimited list of INI-style files that contain device vendor/part-specific parameters
                 MCA btl: parameter "btl_openib_device_type" (current value: "all", data source: default value)
                          Specify to only use IB or iWARP network adapters (infiniband = only use InfiniBand HCAs; iwarp = only use iWARP NICs; all = use any available adapters)
                 MCA btl: parameter "btl_openib_max_btls" (current value: "-1", data source: default value)
                          Maximum number of device ports to use (-1 = use all available, otherwise must be >= 1)
                 MCA btl: parameter "btl_openib_free_list_num" (current value: "8", data source: default value)
                          Initial size of free lists (must be >= 1)
                 MCA btl: parameter "btl_openib_free_list_max" (current value: "-1", data source: default value)
                          Maximum size of free lists (-1 = infinite, otherwise must be >= 0)
                 MCA btl: parameter "btl_openib_free_list_inc" (current value: "32", data source: default value)
                          Increment size of free lists (must be >= 1)
                 MCA btl: parameter "btl_openib_mpool" (current value: "rdma", data source: default value)
                          Name of the memory pool to be used (it is unlikely that you will ever want to change this)
                 MCA btl: parameter "btl_openib_reg_mru_len" (current value: "16", data source: default value)
                          Length of the registration cache most recently used list (must be >= 1)
                 MCA btl: parameter "btl_openib_cq_size" (current value: "1000", data source: default value, synonyms: btl_openib_ib_cq_size)
                          Minimum size of the OpenFabrics completion queue (CQs are automatically sized based on the number of peer MPI processes; this value determines the *minimum* size of all CQs)
                 MCA btl: parameter "btl_openib_ib_cq_size" (current value: "1000", data source: default value, deprecated, synonym of: btl_openib_cq_size)
                          Minimum size of the OpenFabrics completion queue (CQs are automatically sized based on the number of peer MPI processes; this value determines the *minimum* size of all CQs)
                 MCA btl: parameter "btl_openib_max_inline_data" (current value: "-1", data source: default value, synonyms: btl_openib_ib_max_inline_data)
                          Maximum size of inline data segment (-1 = run-time probe to discover max value, otherwise must be >= 0). If not explicitly set, use max_inline_data from the INI file containing device-specific parameters
                 MCA btl: parameter "btl_openib_ib_max_inline_data" (current value: "-1", data source: default value, deprecated, synonym of: btl_openib_max_inline_data)
                          Maximum size of inline data segment (-1 = run-time probe to discover max value, otherwise must be >= 0). If not explicitly set, use max_inline_data from the INI file containing device-specific parameters
                 MCA btl: parameter "btl_openib_pkey" (current value: "0", data source: default value, synonyms: btl_openib_ib_pkey_val)
                          OpenFabrics partition key (pkey) value. Unsigned integer decimal or hex values are allowed (e.g., "3" or "0x3f") and will be masked against the maximum allowable IB partition key value (0x7fff)
                 MCA btl: parameter "btl_openib_ib_pkey_val" (current value: "0", data source: default value, deprecated, synonym of: btl_openib_pkey)
                          OpenFabrics partition key (pkey) value. Unsigned integer decimal or hex values are allowed (e.g., "3" or "0x3f") and will be masked against the maximum allowable IB partition key value (0x7fff)
                 MCA btl: parameter "btl_openib_psn" (current value: "0", data source: default value, synonyms: btl_openib_ib_psn)
                          OpenFabrics packet sequence starting number (must be >= 0)
                 MCA btl: parameter "btl_openib_ib_psn" (current value: "0", data source: default value, deprecated, synonym of: btl_openib_psn)
                          OpenFabrics packet sequence starting number (must be >= 0)
                 MCA btl: parameter "btl_openib_ib_qp_ous_rd_atom" (current value: "4", data source: default value)
                          InfiniBand outstanding atomic reads (must be >= 0)
                 MCA btl: parameter "btl_openib_mtu" (current value: "3", data source: default value, synonyms: btl_openib_ib_mtu)
                          OpenFabrics MTU, in bytes (if not specified in INI files).  Valid values are: 1=256 bytes, 2=512 bytes, 3=1024 bytes, 4=2048 bytes, 5=4096 bytes
                 MCA btl: parameter "btl_openib_ib_mtu" (current value: "3", data source: default value, deprecated, synonym of: btl_openib_mtu)
                          OpenFabrics MTU, in bytes (if not specified in INI files).  Valid values are: 1=256 bytes, 2=512 bytes, 3=1024 bytes, 4=2048 bytes, 5=4096 bytes
                 MCA btl: parameter "btl_openib_ib_min_rnr_timer" (current value: "25", data source: default value)
                          InfiniBand minimum "receiver not ready" timer, in seconds (must be >= 0 and <= 31)
                 MCA btl: parameter "btl_openib_ib_timeout" (current value: "20", data source: default value)
                          InfiniBand transmit timeout, plugged into formula: 4.096 microseconds * (2^btl_openib_ib_timeout)(must be >= 0 and <= 31)
                 MCA btl: parameter "btl_openib_ib_retry_count" (current value: "7", data source: default value)
                          InfiniBand transmit retry count (must be >= 0 and <= 7)
                 MCA btl: parameter "btl_openib_ib_rnr_retry" (current value: "7", data source: default value)
                          InfiniBand "receiver not ready" retry count; applies *only* to SRQ/XRC queues.  PP queues use RNR retry values of 0 because Open MPI performs software flow control to guarantee that RNRs never occur (must be >= 0 and <= 7; 7 = "infinite")
                 MCA btl: parameter "btl_openib_ib_max_rdma_dst_ops" (current value: "4", data source: default value)
                          InfiniBand maximum pending RDMA destination operations (must be >= 0)
                 MCA btl: parameter "btl_openib_ib_service_level" (current value: "0", data source: default value)
                          InfiniBand service level (must be >= 0 and <= 15)
                 MCA btl: parameter "btl_openib_use_eager_rdma" (current value: "-1", data source: default value)
                          Use RDMA for eager messages (-1 = use device default, 0 = do not use eager RDMA, 1 = use eager RDMA)
                 MCA btl: parameter "btl_openib_eager_rdma_threshold" (current value: "16", data source: default value)
                          Use RDMA for short messages after this number of messages are received from a given peer (must be >= 1)
                 MCA btl: parameter "btl_openib_max_eager_rdma" (current value: "16", data source: default value)
                          Maximum number of peers allowed to use RDMA for short messages (RDMA is used for all long messages, except if explicitly disabled, such as with the "dr" pml) (must be >= 0)
                 MCA btl: parameter "btl_openib_eager_rdma_num" (current value: "16", data source: default value)
                          Number of RDMA buffers to allocate for small messages (must be >= 1)
                 MCA btl: parameter "btl_openib_btls_per_lid" (current value: "1", data source: default value)
                          Number of BTLs to create for each InfiniBand LID (must be >= 1)
                 MCA btl: parameter "btl_openib_max_lmc" (current value: "0", data source: default value)
                          Maximum number of LIDs to use for each device port (must be >= 0, where 0 = use all available)
                 MCA btl: parameter "btl_openib_enable_apm_over_lmc" (current value: "0", data source: default value)
                          Maximum number of alternative paths for each device port (must be >= -1, where 0 = disable APM, -1 = all available alternative paths)
                 MCA btl: parameter "btl_openib_enable_apm_over_ports" (current value: "0", data source: default value)
                          Enable alternative path migration (APM) over different ports of the same device (must be >= 0, where 0 = disable APM over ports, 1 = enable APM over ports of the same device)
                 MCA btl: parameter "btl_openib_use_async_event_thread" (current value: "1", data source: default value)
                          If nonzero, use the thread that will handle InfiniBand asynchronous events
                 MCA btl: parameter "btl_openib_buffer_alignment" (current value: "64", data source: default value)
                          Preferred communication buffer alignment, in bytes (must be > 0 and a power of two)
                 MCA btl: parameter "btl_openib_use_message_coalescing" (current value: "1", data source: default value)
                          Use message coalescing
                 MCA btl: parameter "btl_openib_cq_poll_ratio" (current value: "100", data source: default value)
                          how often poll high priority CQ versus low priority CQ
                 MCA btl: parameter "btl_openib_eager_rdma_poll_ratio" (current value: "100", data source: default value)
                          how often poll eager RDMA channel versus CQ
                 MCA btl: parameter "btl_openib_hp_cq_poll_per_progress" (current value: "10", data source: default value)
                          max number of completion events to process for each call of BTL progress engine
                 MCA btl: information "btl_openib_have_fork_support" (value: "1", data source: default value)
                          Whether the OpenFabrics stack supports applications that invoke the "fork()" system call or not (0 = no, 1 = yes).  Note that this value does NOT indicate whether the system being run on supports "fork()" with OpenFabrics applications or not.
                 MCA btl: parameter "btl_openib_exclusivity" (current value: "1024", data source: default value)
                          BTL exclusivity (must be >= 0)
                 MCA btl: parameter "btl_openib_flags" (current value: "310", data source: default value)
                          BTL bit flags (general flags: SEND=1, PUT=2, GET=4, SEND_INPLACE=8, RDMA_MATCHED=64, HETEROGENEOUS_RDMA=256; flags only used by the "dr" PML (ignored by others): ACK=16, CHECKSUM=32, RDMA_COMPLETION=128)
                 MCA btl: parameter "btl_openib_rndv_eager_limit" (current value: "12288", data source: default value)
                          Size (in bytes) of "phase 1" fragment sent for all large messages (must be >= 0 and <= eager_limit)
                 MCA btl: parameter "btl_openib_eager_limit" (current value: "12288", data source: default value)
                          Maximum size (in bytes) of "short" messages (must be >= 1).
                 MCA btl: parameter "btl_openib_max_send_size" (current value: "65536", data source: default value)
                          Maximum size (in bytes) of a single "phase 2" fragment of a long message when using the pipeline protocol (must be >= 1)
                 MCA btl: parameter "btl_openib_rdma_pipeline_send_length" (current value: "1048576", data source: default value)
                          Length of the "phase 2" portion of a large message (in bytes) when using the pipeline protocol.  This part of the message will be split into fragments of size max_send_size and sent using send/receive semantics (must be >= 0; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_openib_rdma_pipeline_frag_size" (current value: "1048576", data source: default value)
                          Maximum size (in bytes) of a single "phase 3" fragment from a long message when using the pipeline protocol.  These fragments will be sent using RDMA semantics (must be >= 1; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_openib_min_rdma_pipeline_size" (current value: "262144", data source: default value)
                          Messages smaller than this size (in bytes) will not use the RDMA pipeline protocol.  Instead, they will be split into fragments of max_send_size and sent using send/receive semantics (must be >=0, and is automatically adjusted up to at least (eager_limit+btl_rdma_pipeline_send_length); only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_openib_bandwidth" (current value: "800", data source: default value)
                         Approximate maximum bandwidth of interconnect (must be >= 1)
                 MCA btl: parameter "btl_openib_latency" (current value: "10", data source: default value)
                          Approximate latency of interconnect (must be >= 0)
                 MCA btl: parameter "btl_openib_receive_queues" (current value: "P,128,256,192,128:S,2048,256,128,32:S,12288,256,128,32:S,65536,256,128,32", data source: default value)
                         Colon-delimited, comma-delimited list of receive queues: P,4096,8,6,4:P,32768,8,6,4
                 MCA btl: parameter "btl_openib_if_include" (current value: <none>, data source: default value)
                          Comma-delimited list of devices/ports to be used (e.g. "mthca0,mthca1:2"; empty value means to use all ports found).  Mutually exclusive with btl_openib_if_exclude.
                 MCA btl: parameter "btl_openib_if_exclude" (current value: <none>, data source: default value)
                         Comma-delimited list of devices/ports to be excluded (empty value means to not exclude any ports).  Mutually exclusive with btl_openib_if_include.
                 MCA btl: parameter "btl_openib_ipaddr_include" (current value: <none>, data source: default value)
                          Comma-delimited list of IP Addresses to be used (e.g. "192.168.1.0/24").  Mutually exclusive with btl_openib_ipaddr_exclude.
                 MCA btl: parameter "btl_openib_ipaddr_exclude" (current value: <none>, data source: default value)
                          Comma-delimited list of IP Addresses to be excluded (e.g. "192.168.1.0/24").  Mutually exclusive with btl_openib_ipaddr_include.
                 MCA btl: parameter "btl_openib_cpc_include" (current value: <none>, data source: default value)
                          Method used to select OpenFabrics connections (valid values: oob,rdmacm)
                 MCA btl: parameter "btl_openib_cpc_exclude" (current value: <none>, data source: default value)
                          Method used to exclude OpenFabrics connections (valid values: oob,rdmacm)
                 MCA btl: parameter "btl_openib_connect_oob_priority" (current value: "50", data source: default value)
                          The selection method priority for oob
                 MCA btl: parameter "btl_openib_connect_rdmacm_priority" (current value: "30", data source: default value)
                          The selection method priority for rdma_cm
                 MCA btl: parameter "btl_openib_connect_rdmacm_port" (current value: "0", data source: default value)
                          The selection method port for rdma_cm
                 MCA btl: parameter "btl_openib_connect_rdmacm_resolve_timeout" (current value: "30000", data source: default value)
                         The timeout (in milliseconds) for address and route resolution
                 MCA btl: parameter "btl_openib_connect_rdmacm_retry_count" (current value: "20", data source: default value)
                          Maximum number of times rdmacm will retry route resolution
                 MCA btl: parameter "btl_openib_connect_rdmacm_reject_causes_connect_error" (current value: "0", data source: default value)
                          The drivers for some devices are buggy such that an RDMA REJECT action may result in a CONNECT_ERROR event instead of a REJECTED event.  Setting this MCA parameter to true tells Open MPI to treat CONNECT_ERROR events on connections where a REJECT is expected as a REJECT (default: false)
                 MCA btl: parameter "btl_openib_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_self_free_list_num" (current value: "0", data source: default value)
                          Number of fragments by default
                 MCA btl: parameter "btl_self_free_list_max" (current value: "-1", data source: default value)
                          Maximum number of fragments
                 MCA btl: parameter "btl_self_free_list_inc" (current value: "32", data source: default value)
                          Increment by this number of fragments
                 MCA btl: parameter "btl_self_exclusivity" (current value: "65536", data source: default value)
                          BTL exclusivity (must be >= 0)
                 MCA btl: parameter "btl_self_flags" (current value: "10", data source: default value)
                          BTL bit flags (general flags: SEND=1, PUT=2, GET=4, SEND_INPLACE=8, RDMA_MATCHED=64, HETEROGENEOUS_RDMA=256; flags only used by the "dr" PML (ignored by others): ACK=16, CHECKSUM=32, RDMA_COMPLETION=128)
                 MCA btl: parameter "btl_self_rndv_eager_limit" (current value: "131072", data source: default value)
                          Size (in bytes) of "phase 1" fragment sent for all large messages (must be >= 0 and <= eager_limit)
                 MCA btl: parameter "btl_self_eager_limit" (current value: "131072", data source: default value)
                          Maximum size (in bytes) of "short" messages (must be >= 1).
                 MCA btl: parameter "btl_self_max_send_size" (current value: "262144", data source: default value)
                          Maximum size (in bytes) of a single "phase 2" fragment of a long message when using the pipeline protocol (must be >= 1)
                 MCA btl: parameter "btl_self_rdma_pipeline_send_length" (current value: "2147483647", data source: default value)
                          Length of the "phase 2" portion of a large message (in bytes) when using the pipeline protocol.  This part of the message will be split into fragments of size max_send_size and sent using send/receive semantics (must be >= 0; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_self_rdma_pipeline_frag_size" (current value: "2147483647", data source: default value)
                          Maximum size (in bytes) of a single "phase 3" fragment from a long message when using the pipeline protocol.  These fragments will be sent using RDMA semantics (must be >= 1; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_self_min_rdma_pipeline_size" (current value: "0", data source: default value)
                          Messages smaller than this size (in bytes) will not use the RDMA pipeline protocol.  Instead, they will be split into fragments of max_send_size and sent using send/receive semantics (must be >=0, and is automatically adjusted up to at least (eager_limit+btl_rdma_pipeline_send_length); only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_self_bandwidth" (current value: "100", data source: default value)
                         Approximate maximum bandwidth of interconnect (must be >= 1)
                 MCA btl: parameter "btl_self_latency" (current value: "0", data source: default value)
                          Approximate latency of interconnect (must be >= 0)
                 MCA btl: parameter "btl_self_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_sm_free_list_num" (current value: "8", data source: default value)
                 MCA btl: parameter "btl_sm_free_list_max" (current value: "-1", data source: default value)
                 MCA btl: parameter "btl_sm_free_list_inc" (current value: "64", data source: default value)
                 MCA btl: parameter "btl_sm_max_procs" (current value: "-1", data source: default value)
                 MCA btl: parameter "btl_sm_mpool" (current value: "sm", data source: default value)
                 MCA btl: parameter "btl_sm_fifo_size" (current value: "4096", data source: default value)
                 MCA btl: parameter "btl_sm_num_fifos" (current value: "1", data source: default value)
                 MCA btl: parameter "btl_sm_fifo_lazy_free" (current value: "120", data source: default value)
                 MCA btl: parameter "btl_sm_sm_extra_procs" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_sm_exclusivity" (current value: "65535", data source: default value)
                          BTL exclusivity (must be >= 0)
                 MCA btl: parameter "btl_sm_flags" (current value: "1", data source: default value)
                          BTL bit flags (general flags: SEND=1, PUT=2, GET=4, SEND_INPLACE=8, RDMA_MATCHED=64, HETEROGENEOUS_RDMA=256; flags only used by the "dr" PML (ignored by others): ACK=16, CHECKSUM=32, RDMA_COMPLETION=128)
                 MCA btl: parameter "btl_sm_rndv_eager_limit" (current value: "4096", data source: default value)
                          Size (in bytes) of "phase 1" fragment sent for all large messages (must be >= 0 and <= eager_limit)
                 MCA btl: parameter "btl_sm_eager_limit" (current value: "4096", data source: default value)
                          Maximum size (in bytes) of "short" messages (must be >= 1).
                 MCA btl: parameter "btl_sm_max_send_size" (current value: "32768", data source: default value)
                          Maximum size (in bytes) of a single "phase 2" fragment of a long message when using the pipeline protocol (must be >= 1)
                 MCA btl: parameter "btl_sm_bandwidth" (current value: "900", data source: default value)
                         Approximate maximum bandwidth of interconnect (must be >= 1)
                 MCA btl: parameter "btl_sm_latency" (current value: "100", data source: default value)
                          Approximate latency of interconnect (must be >= 0)
                 MCA btl: parameter "btl_sm_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_tcp_links" (current value: "1", data source: default value)
                 MCA btl: parameter "btl_tcp_if_include" (current value: <none>, data source: default value)
                 MCA btl: parameter "btl_tcp_if_exclude" (current value: "lo", data source: default value)
                 MCA btl: parameter "btl_tcp_free_list_num" (current value: "8", data source: default value)
                 MCA btl: parameter "btl_tcp_free_list_max" (current value: "-1", data source: default value)
                 MCA btl: parameter "btl_tcp_free_list_inc" (current value: "32", data source: default value)
                 MCA btl: parameter "btl_tcp_sndbuf" (current value: "131072", data source: default value)
                 MCA btl: parameter "btl_tcp_rcvbuf" (current value: "131072", data source: default value)
                 MCA btl: parameter "btl_tcp_endpoint_cache" (current value: "30720", data source: default value)
                          The size of the internal cache for each TCP connection. This cache is used to reduce the number of syscalls, by replacing them with memcpy. Every read will read the expected data plus the amount of the endpoint_cache
                 MCA btl: parameter "btl_tcp_use_nagle" (current value: "0", data source: default value)
                          Whether to use Nagle's algorithm or not (using Nagle's algorithm may increase short message latency)
                 MCA btl: parameter "btl_tcp_port_min_v4" (current value: "1024", data source: default value)
                          The minimum port where the TCP BTL will try to bind (default 1024)
                 MCA btl: parameter "btl_tcp_port_range_v4" (current value: "64511", data source: default value)
                          The number of ports where the TCP BTL will try to bind (default 64511). This parameter together with the port min, define a range of ports where Open MPI will open sockets.
                 MCA btl: parameter "btl_tcp_port_min_v6" (current value: "1024", data source: default value)
                          The minimum port where the TCP BTL will try to bind (default 1024)
                 MCA btl: parameter "btl_tcp_port_range_v6" (current value: "64511", data source: default value)
                          The number of ports where the TCP BTL will try to bind (default 64511). This parameter together with the port min, define a range of ports where Open MPI will open sockets.
                 MCA btl: parameter "btl_tcp_exclusivity" (current value: "100", data source: default value)
                          BTL exclusivity (must be >= 0)
                 MCA btl: parameter "btl_tcp_flags" (current value: "314", data source: default value)
                          BTL bit flags (general flags: SEND=1, PUT=2, GET=4, SEND_INPLACE=8, RDMA_MATCHED=64, HETEROGENEOUS_RDMA=256; flags only used by the "dr" PML (ignored by others): ACK=16, CHECKSUM=32, RDMA_COMPLETION=128)
                 MCA btl: parameter "btl_tcp_rndv_eager_limit" (current value: "65536", data source: default value)
                          Size (in bytes) of "phase 1" fragment sent for all large messages (must be >= 0 and <= eager_limit)
                 MCA btl: parameter "btl_tcp_eager_limit" (current value: "65536", data source: default value)
                          Maximum size (in bytes) of "short" messages (must be >= 1).
                 MCA btl: parameter "btl_tcp_max_send_size" (current value: "131072", data source: default value)
                          Maximum size (in bytes) of a single "phase 2" fragment of a long message when using the pipeline protocol (must be >= 1)
                 MCA btl: parameter "btl_tcp_rdma_pipeline_send_length" (current value: "131072", data source: default value)
                          Length of the "phase 2" portion of a large message (in bytes) when using the pipeline protocol.  This part of the message will be split into fragments of size max_send_size and sent using send/receive semantics (must be >= 0; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_tcp_rdma_pipeline_frag_size" (current value: "2147483647", data source: default value)
                          Maximum size (in bytes) of a single "phase 3" fragment from a long message when using the pipeline protocol.  These fragments will be sent using RDMA semantics (must be >= 1; only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_tcp_min_rdma_pipeline_size" (current value: "0", data source: default value)
                          Messages smaller than this size (in bytes) will not use the RDMA pipeline protocol.  Instead, they will be split into fragments of max_send_size and sent using send/receive semantics (must be >=0, and is automatically adjusted up to at least (eager_limit+btl_rdma_pipeline_send_length); only relevant when the PUT flag is set)
                 MCA btl: parameter "btl_tcp_bandwidth" (current value: "100", data source: default value)
                         Approximate maximum bandwidth of interconnect (must be >= 1)
                 MCA btl: parameter "btl_tcp_latency" (current value: "100", data source: default value)
                          Approximate latency of interconnect (must be >= 0)
                 MCA btl: parameter "btl_tcp_disable_family" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_tcp_priority" (current value: "0", data source: default value)
                 MCA btl: parameter "btl_base_include" (current value: <none>, data source: default value)
                 MCA btl: parameter "btl_base_exclude" (current value: <none>, data source: default value)
                 MCA btl: parameter "btl_base_warn_component_unused" (current value: "1", data source: default value)
                          This parameter is used to turn on warning messages when certain NICs are not used
                 MCA mtl: parameter "mtl" (current value: <none>, data source: default value)
                          Default selection set of components for the mtl framework (<none> means use all components that can be found)
                 MCA mtl: parameter "mtl_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the mtl framework (0 = no verbosity)
                MCA topo: parameter "topo" (current value: <none>, data source: default value)
                          Default selection set of components for the topo framework (<none> means use all components that can be found)
                MCA topo: parameter "topo_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the topo framework (0 = no verbosity)
                MCA topo: parameter "topo_unity_priority" (current value: "0", data source: default value)
                 MCA osc: parameter "osc" (current value: <none>, data source: default value)
                          Default selection set of components for the osc framework (<none> means use all components that can be found)
                 MCA osc: parameter "osc_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the osc framework (0 = no verbosity)
                 MCA osc: parameter "osc_pt2pt_no_locks" (current value: "0", data source: default value)
                          Enable optimizations available only if MPI_LOCK is not used.
                 MCA osc: parameter "osc_pt2pt_eager_limit" (current value: "16384", data source: default value)
                          Max size of eagerly sent data
                 MCA osc: parameter "osc_pt2pt_priority" (current value: "0", data source: default value)
                 MCA osc: parameter "osc_rdma_eager_send" (current value: "1", data source: default value)
                         Attempt to start data movement during communication call, instead of at synchronization time.  Info key of same name overrides this value.
                 MCA osc: parameter "osc_rdma_use_buffers" (current value: "0", data source: default value)
                          Coalesce messages during an epoch to reduce network utilization.  Info key of same name overrides this value.
                 MCA osc: parameter "osc_rdma_use_rdma" (current value: "0", data source: default value)
                          Use real RDMA operations to transfer data.  Info key of same name overrides this value.
                 MCA osc: parameter "osc_rdma_rdma_completion_wait" (current value: "1", data source: default value)
                          Wait for all completion of rdma events before sending acknowledgment.  Info key of same name overrides this value.
                 MCA osc: parameter "osc_rdma_no_locks" (current value: "0", data source: default value)
                          Enable optimizations available only if MPI_LOCK is not used.  Info key of same name overrides this value.
                 MCA osc: parameter "osc_rdma_priority" (current value: "0", data source: default value)
                 MCA iof: parameter "iof" (current value: <none>, data source: default value)
                          Default selection set of components for the iof framework (<none> means use all components that can be found)
                 MCA iof: parameter "iof_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the iof framework (0 = no verbosity)
                 MCA iof: parameter "iof_hnp_priority" (current value: "0", data source: default value)
                 MCA iof: parameter "iof_orted_priority" (current value: "0", data source: default value)
                 MCA iof: parameter "iof_tool_priority" (current value: "0", data source: default value)
                 MCA oob: parameter "oob" (current value: <none>, data source: default value)
                          Default selection set of components for the oob framework (<none> means use all components that can be found)
                 MCA oob: parameter "oob_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the oob framework (0 = no verbosity)
                 MCA oob: parameter "oob_tcp_verbose" (current value: "0", data source: default value)
                          Verbose level for the OOB tcp component
                 MCA oob: parameter "oob_tcp_peer_limit" (current value: "-1", data source: default value)
                          Maximum number of peer connections to simultaneously maintain (-1 = infinite)
                 MCA oob: parameter "oob_tcp_peer_retries" (current value: "60", data source: default value)
                          Number of times to try shutting down a connection before giving up
                 MCA oob: parameter "oob_tcp_debug" (current value: "0", data source: default value)
                          Enable (1) / disable (0) debugging output for this component
                 MCA oob: parameter "oob_tcp_sndbuf" (current value: "131072", data source: default value)
                          TCP socket send buffering size (in bytes)
                 MCA oob: parameter "oob_tcp_rcvbuf" (current value: "131072", data source: default value)
                          TCP socket receive buffering size (in bytes)
                 MCA oob: parameter "oob_tcp_if_include" (current value: <none>, data source: default value)
                          Comma-delimited list of TCP interfaces to use
                 MCA oob: parameter "oob_tcp_if_exclude" (current value: <none>, data source: default value)
                          Comma-delimited list of TCP interfaces to exclude
                 MCA oob: parameter "oob_tcp_connect_sleep" (current value: "1", data source: default value)
                          Enable (1) / disable (0) random sleep for connection wireup.
                 MCA oob: parameter "oob_tcp_listen_mode" (current value: "event", data source: default value)
                          Mode for HNP to accept incoming connections: event, listen_thread.
                 MCA oob: parameter "oob_tcp_listen_thread_max_queue" (current value: "10", data source: default value)
                          High water mark for queued accepted socket list size.  Used only when listen_mode is listen_thread.
                 MCA oob: parameter "oob_tcp_listen_thread_wait_time" (current value: "10", data source: default value)
                          Time in milliseconds to wait before actively checking for new connections when listen_mode is listen_thread.
                 MCA oob: parameter "oob_tcp_port_min_v4" (current value: "0", data source: default value)
                          Starting port allowed (IPv4)
                 MCA oob: parameter "oob_tcp_port_range_v4" (current value: "65535", data source: default value)
                          Range of allowed ports (IPv4)
                 MCA oob: parameter "oob_tcp_disable_family" (current value: "0", data source: default value)
                          Disable IPv4 (4) or IPv6 (6)
                 MCA oob: parameter "oob_tcp_port_min_v6" (current value: "0", data source: default value)
                          Starting port allowed (IPv6)
                 MCA oob: parameter "oob_tcp_port_range_v6" (current value: "65535", data source: default value)
                          Range of allowed ports (IPv6)
                 MCA oob: parameter "oob_tcp_priority" (current value: "0", data source: default value)
                MCA odls: parameter "odls_base_sigkill_timeout" (current value: "1", data source: default value)
                          Time to wait for a process to die after issuing a kill signal to it
                MCA odls: parameter "odls_base_report_bindings" (current value: "0", data source: default value)
                          Report process bindings [default: no]
                MCA odls: parameter "odls" (current value: <none>, data source: default value)
                          Default selection set of components for the odls framework (<none> means use all components that can be found)
                MCA odls: parameter "odls_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the odls framework (0 = no verbosity)
                MCA odls: parameter "odls_default_priority" (current value: "0", data source: default value)
                 MCA ras: parameter "ras_base_display_alloc" (current value: "0", data source: default value)
                          Whether to display the allocation after it is determined
                 MCA ras: parameter "ras_base_display_devel_alloc" (current value: "0", data source: default value)
                          Whether to display a developer-detail allocation after it is determined
                 MCA ras: parameter "ras" (current value: <none>, data source: default value)
                          Default selection set of components for the ras framework (<none> means use all components that can be found)
                 MCA ras: parameter "ras_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the ras framework (0 = no verbosity)
                 MCA ras: parameter "ras_slurm_priority" (current value: "75", data source: default value)
                          Priority of the slurm ras component
               MCA rmaps: parameter "rmaps_rank_file_path" (current value: <none>, data source: default value, synonym of: orte_rankfile)
                          Name of the rankfile to be used for mapping processes (relative or absolute path)
               MCA rmaps: parameter "rmaps_base_schedule_policy" (current value: "slot", data source: default value)
                          Scheduling Policy for RMAPS. [slot (alias:core) | socket | board | node]
               MCA rmaps: parameter "rmaps_base_pernode" (current value: "0", data source: default value)
                          Launch one ppn as directed
               MCA rmaps: parameter "rmaps_base_n_pernode" (current value: "-1", data source: default value)
                          Launch n procs/node
               MCA rmaps: parameter "rmaps_base_n_perboard" (current value: "-1", data source: default value)
                          Launch n procs/board
               MCA rmaps: parameter "rmaps_base_n_persocket" (current value: "-1", data source: default value)
                          Launch n procs/socket
               MCA rmaps: parameter "rmaps_base_loadbalance" (current value: "0", data source: default value)
                          Balance total number of procs across all allocated nodes
               MCA rmaps: parameter "rmaps_base_cpus_per_proc" (current value: "1", data source: default value, synonyms: rmaps_base_cpus_per_rank)
                          Number of cpus to use for each rank [1-2**15 (default=1)]
               MCA rmaps: parameter "rmaps_base_cpus_per_rank" (current value: "1", data source: default value, synonym of: rmaps_base_cpus_per_proc)
                          Number of cpus to use for each rank [1-2**15 (default=1)]
               MCA rmaps: parameter "rmaps_base_stride" (current value: "1", data source: default value)
                          When binding multiple cores to a rank, the step size to use between cores [1-2**15 (default: 1)]
               MCA rmaps: parameter "rmaps_base_slot_list" (current value: <none>, data source: default value)
                          List of processor IDs to bind MPI processes to (e.g., used in conjunction with rank files) [default=NULL]
               MCA rmaps: parameter "rmaps_base_no_schedule_local" (current value: "0", data source: default value)
                          If false, allow scheduling MPI applications on the same node as mpirun (default).  If true, do not schedule any MPI applications on the same node as mpirun
               MCA rmaps: parameter "rmaps_base_no_oversubscribe" (current value: "0", data source: default value)
                          If true, then do not allow oversubscription of nodes - mpirun will return an error if there aren't enough nodes to launch all processes without oversubscribing
               MCA rmaps: parameter "rmaps_base_display_map" (current value: "0", data source: default value)
                          Whether to display the process map after it is computed
               MCA rmaps: parameter "rmaps_base_display_devel_map" (current value: "0", data source: default value)
                          Whether to display a developer-detail process map after it is computed
               MCA rmaps: parameter "rmaps" (current value: <none>, data source: default value)
                          Default selection set of components for the rmaps framework (<none> means use all components that can be found)
               MCA rmaps: parameter "rmaps_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the rmaps framework (0 = no verbosity)
               MCA rmaps: parameter "rmaps_load_balance_priority" (current value: "0", data source: default value)
               MCA rmaps: parameter "rmaps_rank_file_priority" (current value: "0", data source: default value)
               MCA rmaps: parameter "rmaps_round_robin_priority" (current value: "0", data source: default value)
               MCA rmaps: parameter "rmaps_seq_priority" (current value: "0", data source: default value)
                 MCA rml: parameter "rml_wrapper" (current value: <none>, data source: default value)
                          Use a Wrapper component around the selected RML component
                 MCA rml: parameter "rml" (current value: <none>, data source: default value)
                          Default selection set of components for the rml framework (<none> means use all components that can be found)
                 MCA rml: parameter "rml_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the rml framework (0 = no verbosity)
                 MCA rml: parameter "rml_oob_priority" (current value: "0", data source: default value)
              MCA routed: parameter "routed" (current value: <none>, data source: default value)
                          Default selection set of components for the routed framework (<none> means use all components that can be found)
              MCA routed: parameter "routed_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the routed framework (0 = no verbosity)
              MCA routed: parameter "routed_binomial_priority" (current value: "0", data source: default value)
              MCA routed: parameter "routed_direct_priority" (current value: "0", data source: default value)
              MCA routed: parameter "routed_linear_priority" (current value: "0", data source: default value)
                 MCA plm: parameter "plm" (current value: <none>, data source: default value)
                          Default selection set of components for the plm framework (<none> means use all components that can be found)
                 MCA plm: parameter "plm_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the plm framework (0 = no verbosity)
                 MCA plm: parameter "plm_rsh_num_concurrent" (current value: "128", data source: default value)
                          How many plm_rsh_agent instances to invoke concurrently (must be > 0)
                 MCA plm: parameter "plm_rsh_force_rsh" (current value: "0", data source: default value)
                          Force the launcher to always use rsh
                 MCA plm: parameter "plm_rsh_disable_qrsh" (current value: "0", data source: default value)
                          Disable the launcher to use qrsh when under the SGE parallel environment
                 MCA plm: parameter "plm_rsh_daemonize_qrsh" (current value: "0", data source: default value)
                          Daemonize the orted under the SGE parallel environment
                 MCA plm: parameter "plm_rsh_priority" (current value: "10", data source: default value)
                          Priority of the rsh plm component
                 MCA plm: parameter "plm_rsh_delay" (current value: "1", data source: default value)
                          Delay (in seconds) between invocations of the remote agent, but only used when the "debug" MCA parameter is true, or the top-level MCA debugging is enabled (otherwise this value is ignored)
                 MCA plm: parameter "plm_rsh_assume_same_shell" (current value: "1", data source: default value)
                          If set to 1, assume that the shell on the remote node is the same as the shell on the local node.  Otherwise, probe for what the remote shell.
                 MCA plm: parameter "plm_rsh_agent" (current value: "ssh : rsh", data source: default value, synonyms: pls_rsh_agent)
                          The command used to launch executables on remote nodes (typically either "ssh" or "rsh")
                 MCA plm: parameter "plm_rsh_tree_spawn" (current value: "0", data source: default value)
                          If set to 1, launch via a tree-based topology
                 MCA plm: parameter "plm_slurm_args" (current value: <none>, data source: default value)
                          Custom arguments to srun
                 MCA plm: parameter "plm_slurm_priority" (current value: "0", data source: default value)
               MCA filem: parameter "filem" (current value: <none>, data source: default value)
                          Which Filem component to use (empty = auto-select)
               MCA filem: parameter "filem_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the filem framework (0 = no verbosity)
               MCA filem: parameter "filem_rsh_priority" (current value: "20", data source: default value)
                          Priority of the FILEM rsh component
               MCA filem: parameter "filem_rsh_verbose" (current value: "0", data source: default value)
                          Verbose level for the FILEM rsh component
               MCA filem: parameter "filem_rsh_rcp" (current value: "scp", data source: default value)
                          The rsh cp command for the FILEM rsh component
               MCA filem: parameter "filem_rsh_rsh" (current value: "ssh", data source: default value)
                          The remote shell command for the FILEM rsh component
               MCA filem: parameter "filem_rsh_max_incomming" (current value: "10", data source: default value)
                          Maximum number of incomming connections
               MCA filem: parameter "filem_rsh_max_outgoing" (current value: "10", data source: default value)
                          Maximum number of out going connections (Currently not used)
              MCA errmgr: parameter "errmgr" (current value: <none>, data source: default value)
                          Default selection set of components for the errmgr framework (<none> means use all components that can be found)
              MCA errmgr: parameter "errmgr_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the errmgr framework (0 = no verbosity)
              MCA errmgr: parameter "errmgr_default_priority" (current value: "0", data source: default value)
                 MCA ess: parameter "ess" (current value: <none>, data source: default value)
                          Default selection set of components for the ess framework (<none> means use all components that can be found)
                 MCA ess: parameter "ess_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the ess framework (0 = no verbosity)
                 MCA ess: parameter "ess_env_priority" (current value: "0", data source: default value)
                 MCA ess: parameter "ess_hnp_priority" (current value: "0", data source: default value)
                 MCA ess: parameter "ess_singleton_priority" (current value: "0", data source: default value)
                 MCA ess: parameter "ess_slurm_priority" (current value: "0", data source: default value)
                 MCA ess: parameter "ess_tool_priority" (current value: "0", data source: default value)
             MCA grpcomm: parameter "grpcomm" (current value: <none>, data source: default value)
                          Default selection set of components for the grpcomm framework (<none> means use all components that can be found)
             MCA grpcomm: parameter "grpcomm_base_verbose" (current value: "0", data source: default value)
                          Verbosity level for the grpcomm framework (0 = no verbosity)
             MCA grpcomm: parameter "grpcomm_bad_priority" (current value: "0", data source: default value)
             MCA grpcomm: parameter "grpcomm_basic_priority" (current value: "0", data source: default value)



Tuomas

Radovan Bast

unread,
Feb 25, 2013, 7:37:43 AM2/25/13
to dirac...@googlegroups.com
about the --mpi=1 crash.
this got fixed in 12.3:
http://diracprogram.org/doc/release-12/patches/CHANGELOG.html
you can fetch this update (and later patches) with:
$ ./update/update.py

why the itype test fails is a mystery to me. i will think about it a bit ...

radovan

Radovan Bast

unread,
Feb 25, 2013, 7:48:37 AM2/25/13
to dirac...@googlegroups.com
> Here's the test result for the itype compability:
>
> [loytyntu@taygeta DIRAC-12.2-Source_64]$ pwd
> /home/loytyntu/bin/DIRAC-12.2-Source_64
> [loytyntu@taygeta DIRAC-12.2-Source_64]$ mpif90
> cmake/parallel-environment/test-MPI-itype-compatibility.F90
> [loytyntu@taygeta DIRAC-12.2-Source_64]$
>
> So I don't get any kind of error message. As a result of the compilation I
> get a new executable a.out.

hi Tuomas,
please send me the following file:
./build/CMakeFiles/CMakeError.log
we can do this off the list ...
radovan

Tuomas Löytynoja

unread,
Feb 25, 2013, 8:30:33 AM2/25/13
to dirac...@googlegroups.com
Hello Radovan,

CMakeError.log has the following information:


Performing Fortran SOURCE FILE Test MPI_ITYPE_MATCHES failed with the following output:
Change Dir: /home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp

Run Build Command:/usr/bin/gmake "cmTryCompileExec348996652/fast"
/usr/bin/gmake -f CMakeFiles/cmTryCompileExec348996652.dir/build.make CMakeFiles/cmTryCompileExec348996652.dir/build
gmake[1]: Entering directory `/home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp'
/cvmfs/fgi.csc.fi/devel/sl6/cmake/2.8.9/bin/cmake -E cmake_progress_report /home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp/CMakeFiles 1
Building Fortran object CMakeFiles/cmTryCompileExec348996652.dir/src.f90.o
/export/openmpi/1.4.4/intel-i8/bin/mpif90   -w -assume byterecl -DVAR_IFORT -g -traceback -i8   -DMPI_ITYPE_MATCHES   -c /home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp/src.f90 -o CMakeFiles/cmTryCompileExec348996652.dir/src.f90.o
/home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp/src.f90(10): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
      call mpi_comm_rank(mpi_comm_world, irank, ierr)
-----------^
compilation aborted for /home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp/src.f90 (code 1)
gmake[1]: Leaving directory `/home/loytyntu/bin/DIRAC-12.2-Source_64/build/CMakeFiles/CMakeTmp'
gmake[1]: *** [CMakeFiles/cmTryCompileExec348996652.dir/src.f90.o] Error 1
gmake: *** [cmTryCompileExec348996652/fast] Error 2

Source file was:
program raboof
!  this program won't compile if integer types don't match
   implicit none
contains
   function get_my_rank()
      use mpi
      integer :: get_my_rank
      integer :: irank
      integer :: ierr
      call mpi_comm_rank(mpi_comm_world, irank, ierr)
      get_my_rank = irank
   end function
end program


Tuomas

Radovan Bast

unread,
Feb 25, 2013, 8:45:46 AM2/25/13
to dirac...@googlegroups.com
aha. i forgot about the -i8.
so what fails is this:
$ /export/openmpi/1.4.4/intel-i8/bin/mpif90 -i8
cmake/parallel-environment/test-MPI-itype-compatibility.F90

the error indicates that the MPI module (file mpi.mod) does not match
integer type-wise.
this could be some strange conflict perhaps with some existing MPI module.
how can we verify this Stefan?

Tuomas can you have a look in "printenv" whether you don't have some
other (Open)MPI in there?
(do not post the result printenv to this list without verifying that
there is nothing sensitive
in there, this is a world-readable list)
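one way to check this without posting the whole environment is to print only the library search order, one entry per line. a sketch, seeded with the paths from your install script for illustration; on the cluster you would run the last line against the real $LD_LIBRARY_PATH:

```shell
# inspect the LD_LIBRARY_PATH entries in the order the loader searches them;
# a 32-bit MPI path listed before the intel-i8 one would shadow it
LD_LIBRARY_PATH="/export/openmpi/1.4.4/intel-i8/lib:/export/intel/composerxe-2011.5.220/mkl/lib/intel64"
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -n openmpi
```

if the first numbered openmpi hit is not the intel-i8 path, the wrong mpi.mod is being picked up.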

best greetings,
radovan

Tuomas Löytynoja

unread,
Feb 25, 2013, 9:14:39 AM2/25/13
to dirac...@googlegroups.com
There do not seem to be any other MPI modules listed in the output of printenv. However, I do know that there is also a 32-bit version of the MPI module installed in /export/openmpi/1.4.4/intel/, but it should not be loaded during installation. I will send you, Radovan, the whole output of printenv.



Tuomas

Stefan Knecht

unread,
Feb 25, 2013, 1:19:38 PM2/25/13
to dirac...@googlegroups.com
hi Tuomas,

there are two (separate) issues here.
1. as Radovan pointed out, please update to Dirac patch version 12.5. this will solve your problem with the failing tests, including KRCI with ./runtest --mpi=1.
as you can see from the output at the top, even though you ask for MPI with 1 process, the original Dirac12 version would start a sequential run. this was solved in Dirac12 patch version 12.3.
2. the mismatch of your 64-bit integer MPI library and our test. first, your MPI lib is indeed compiled with 64-bit integers:
from your output of
$ ompi_info -a


       Fort integer size: 8
       Fort logical size: 8
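for reference, the one-line check is a grep on the integer size; since ompi_info only exists where Open MPI is installed, the sketch below feeds it from the captured output above:

```shell
# filter the Fortran integer size from (captured) ompi_info -a output;
# on the cluster itself you would run: ompi_info -a | grep 'Fort integer size'
printf 'Fort integer size: 8\nFort logical size: 8\n' | grep 'Fort integer size'
```

a value of 8 means the library itself was built with 64-bit Fortran integers.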

what i could thus imagine is that in your $LD_LIBRARY_PATH you have a reference to your 32-bit integer MPI lib path which comes before the reference to your 64-bit integer MPI lib path, although you say this should not be the case.
the mpi.mod which is used by "use mpi"  in the .F90 code sits in the /lib directory of your MPI library.
you can send me off-line your LD_LIBRARY_PATH:
$ echo $LD_LIBRARY_PATH

with best regards,

stefan

Tuomas Löytynoja

unread,
Feb 26, 2013, 3:46:17 AM2/26/13
to dirac...@googlegroups.com
Hello,

After reading Stefan's last message I did the following:

1. I went to /export/openmpi/1.4.4/intel-i8/lib and copied the file mpi.mod to the folder /home/loytyntu/bin/DIRAC-12.2-Source_64/cmake/parallel-environment
2. I went to that folder and renamed mpi.mod to test.mod
3. I changed 'use mpi' to 'use test' in the file test-MPI-itype-compatibility.F90
4. I ran the command 'mpif90 -i8 test-MPI-itype-compatibility.F90'

The result was still:

test-MPI-itype-compatibility.F90(10): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
      call mpi_comm_rank(mpi_comm_world, irank, ierr)
-----------^
compilation aborted for test-MPI-itype-compatibility.F90 (code 1)


Does this help? Is the file mpi.mod something else than it should be for some reason?

I also tried to install Dirac 12.5, but then I couldn't run any tests without sending them separately to the batch job system. I tested two cases, atomic_start and krci_energy. The first one passed; the second crashed, and the error was the usual:

 **** dirac-executable stderr console output : **** 
*** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[ta15:28601] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!

directory: /home/loytyntu/dirac/test2/krci_energy
   inputs: F.mol  &  f.inp



Tuomas

Stefan Knecht

unread,
Feb 26, 2013, 3:57:46 AM2/26/13
to dirac...@googlegroups.com
hi,


On 26/02/13 09.46, Tuomas Löytynoja wrote:
> Hello,
>
> After reading Stefan's last message I did the following:
>
> 1. I went to the path /export/openmpi/1.4.4/intel-i8/lib and copied file mpi.mod to the folder /home/loytyntu/bin/DIRAC-12.2-Source_64/cmake/parallel-environment
> 2. I went to that folder changed file name mpi.mod to test.mod
> 3. I changed 'use mpi' to 'use test' in file test-MPI-itype-compatibility.F90
> 4. I ran command 'mpif90 -i8 test-MPI-itype-compatibility.F90'
>
> The result was still:
>
> test-MPI-itype-compatibility.F90(10): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
>       call mpi_comm_rank(mpi_comm_world, irank, ierr)
> -----------^
> compilation aborted for test-MPI-itype-compatibility.F90 (code 1)
>
> Does this help? Is the file mpi.mod something else that it should be for some reason?
>
> I also tried to install Dirac 12.5, but then I couldn't run any tests without sending them separately to the batch job system. I tested two cases, atomic_start and krci_energy. The first one passed and the second crashed and the error was the usual:
>
>  **** dirac-executable stderr console output : ****
> *** The MPI_Type_f2c() function was called before MPI_INIT was invoked.
> *** This is disallowed by the MPI standard.
> *** Your MPI job will now abort.
> [ta15:28601] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
>
> directory: /home/loytyntu/dirac/test2/krci_energy
>    inputs: F.mol  &  f.inp

that's weird. your testjob still seems to run sequentially by default (check the top of the output where the master prints the memory consumption; if it says "serial" it still goes wrong and we should check our script "pam").
it's powerful but unfortunately sometimes full of pitfalls... ;)
 
what happens if you run with --mpi=2?

with best regards,

stefan

Tuomas Löytynoja

unread,
Feb 26, 2013, 5:00:12 AM2/26/13
to dirac...@googlegroups.com
Hi,

It seems that I forgot the '--mpi' parameter in the Dirac 12.5 batch job system test. When I added '--mpi=4', neither atomic_start nor krci_energy passed. The error messages for the first one were:



 **** dirac-executable stderr console output : **** 
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
dirac.x            0000000000647291  interface_to_mpi_         546  interface_to_mpi.F90
dirac.x            000000000048F337  mpixinit_                  94  mpi_framework.F90

Stack trace terminated abnormally.
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
dirac.x            0000000000647291  interface_to_mpi_         546  interface_to_mpi.F90
dirac.x            000000000048F337  mpixinit_                  94  mpi_framework.F90

Stack trace terminated abnormally.
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 21809 on
node ta17 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

directory: /home/loytyntu/dirac/test2/atomic_start
   inputs: H.mol  &  H.inp
Traceback (most recent call last):
  File "/home/loytyntu/bin/DIRAC-12.5-Source_64/build/pam", line 1902, in <module>
    sys.exit(main(*sys.argv))
  File "/home/loytyntu/bin/DIRAC-12.5-Source_64/build/pam", line 1644, in main
    dirac_run.perform()
  File "/home/loytyntu/bin/DIRAC-12.5-Source_64/build/pam", line 1636, in perform
    self.pam_variables.save_files()
  File "/home/loytyntu/bin/DIRAC-12.5-Source_64/build/pam", line 1485, in save_files
    save_single_file('DFCOEF')
  File "/home/loytyntu/bin/DIRAC-12.5-Source_64/build/pam", line 1468, in save_single_file
    print_error_message('%s not found in the scratch directory.' % file_name)
NameError: global name 'file_name' is not defined


And for the second one:

DIRAC pam run in /home/loytyntu/dirac/test2/krci_energy

  ** notice ** integer kinds do not match: dirac --> kind = 8 MPI library --> kind =  4
  ** interface to 32-bit integer MPI enabled **


 ====  below this line is the stderr stream  ====
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
dirac.x            0000000000647291  interface_to_mpi_         546  interface_to_mpi.F90
dirac.x            000000000048F337  mpixinit_                  94  mpi_framework.F90

Stack trace terminated abnormally.
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
dirac.x            0000000000647291  interface_to_mpi_         546  interface_to_mpi.F90
dirac.x            000000000048F337  mpixinit_                  94  mpi_framework.F90

Stack trace terminated abnormally.
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 21605 on
node ta17 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source             
dirac.x            0000000000647291  interface_to_mpi_         546  interface_to_mpi.F90
dirac.x            000000000048F337  mpixinit_                  94  mpi_framework.F90

Stack trace terminated abnormally.


I can send you the full run script and input/output files if you want to take a look at them. There could be something simple wrong there.



Tuomas

Stefan Knecht

unread,
Feb 26, 2013, 5:36:40 AM2/26/13
to dirac...@googlegroups.com
hi Tuomas,

that looks like you may have a general MPI problem.
can you compile and run a simple hello_world.F90 with mpif90 -i8?

program hello_world

 
  use mpi
  implicit none

  integer numtasks, rank, ierr, rc, len, i
  character*(MPI_MAX_PROCESSOR_NAME) name

  call MPI_INIT(ierr)

  if (ierr /= MPI_SUCCESS) then
     print *,'Error starting MPI program. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  call MPI_GET_PROCESSOR_NAME(name, len, ierr)
  if (ierr /= MPI_SUCCESS) then
     print *,'Error getting processor name. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  print *, 'Number of tasks=',numtasks,' My rank=',rank,' My name=', trim(name)

  call MPI_FINALIZE(ierr)

end program hello_world

$ mpif90 -i8 hello_world.F90
$ mpirun -np 4 ./a.out

?

with best regards,

stefan
--
You received this message because you are subscribed to the Google Groups "dirac-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dirac-users...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

Tuomas Löytynoja

unread,
Feb 26, 2013, 5:44:13 AM2/26/13
to dirac...@googlegroups.com
Hi,

It didn't compile; here's the result:


[loytyntu@taygeta ~]$ mpif90 -i8 hello_world.F90
hello_world.F90(10): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_INIT]
  call MPI_INIT(ierr)
-------^
hello_world.F90(14): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ABORT]
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
----------^
hello_world.F90(17): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_SIZE]
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
-------^
hello_world.F90(19): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
-------^
hello_world.F90(21): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_GET_PROCESSOR_NAME]
  call MPI_GET_PROCESSOR_NAME(name, len, ierr)
-------^
hello_world.F90(24): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ABORT]
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
----------^
hello_world.F90(29): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_FINALIZE]
  call MPI_FINALIZE(ierr)
-------^
compilation aborted for hello_world.F90 (code 1)



Regards
Tuomas

Stefan Knecht

unread,
Feb 26, 2013, 6:19:19 AM2/26/13
to dirac...@googlegroups.com
hi,

that probably means that something is fishy with your 64-bit integer MPI installation. could you please consult your local
sysadmin? maybe she/he could help you. if this is fixed i am sure Dirac will also work properly. :)

with best regards,

stefan

Ossama Kullie

unread,
Feb 26, 2013, 6:23:47 AM2/26/13
to dirac...@googlegroups.com
Hi Stefan,

I tried your program and get a similar error to that of Tuomas:
------------------------------------- here

 [okullie@hpc-f01 ~]$ mpif90 -i8 hello_world.F90
hello_world.F90(8): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_INIT]
      call MPI_INIT(ierr)
-----------^
hello_world.F90(12): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ABORT]
       call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
------------^
hello_world.F90(15): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_SIZE]
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
-----------^
hello_world.F90(17): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_COMM_RANK]
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
-----------^
hello_world.F90(19): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_GET_PROCESSOR_NAME]
      call MPI_GET_PROCESSOR_NAME(name, len, ierr)
-----------^
hello_world.F90(22): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_ABORT]
      call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
-----------^
hello_world.F90(27): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_FINALIZE]
      call MPI_FINALIZE(ierr)
-----------^

compilation aborted for hello_world.F90 (code 1)



Stefan Knecht

unread,
Feb 26, 2013, 7:07:12 AM2/26/13
to dirac...@googlegroups.com
hi Ossama,

that just means that your mpi.mod file was not compiled with 64-bit integers.

with best regards,

stefan

Tuomas Löytynoja

unread,
Feb 26, 2013, 8:31:29 AM2/26/13
to dirac...@googlegroups.com
Dear all,

This case seems to be solved! I heard that the problem was that the OpenMPI installation wasn't clean: there were some leftovers from previous 32-bit compilations. The key was to do the OpenMPI installation by starting cleanly from the .tar file. I guess in these kinds of situations 'ompi_info -a | grep 'Fort integer size'' cannot tell the whole truth.
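For anyone hitting the same wall, a clean 64-bit integer Open MPI build from the tarball looks roughly like this. A sketch for the Intel compilers; the prefix, -j value, and flags are illustrative and worth checking against the Open MPI build documentation:

```shell
# rebuild Open MPI 1.4.4 from a pristine tarball with 8-byte Fortran integers
# (prefix matches the cluster layout in this thread -- adjust as needed)
tar xzf openmpi-1.4.4.tar.gz
cd openmpi-1.4.4
./configure --prefix=/export/openmpi/1.4.4/intel-i8 \
            CC=icc CXX=icpc F77=ifort FC=ifort \
            FFLAGS=-i8 FCFLAGS=-i8
make -j 6 && make install
```

Building in a fresh directory avoids the stale 32-bit objects and mpi.mod that caused the mismatch here.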

After this I installed Dirac again. Now the setup part didn't complain about a missing 64-bit MPI interface. All the tests with the --quick option passed (except the RELADC-related ones, which were skipped, but luckily I don't need that part of the code).

Now I'm going back to do some more testing. I will let you know if something goes wrong.

Thank you a thousand times for your help both Stefan and Radovan! :)



Sincerely
Tuomas

Stefan Knecht

unread,
Feb 26, 2013, 8:52:55 AM2/26/13
to dirac...@googlegroups.com
dear Tuomas,

good news (for Dirac) and you :).
thanks a lot for not giving up, i am happy that you eventually found the problem.
lesson learned: if a simple hello_world.F90 runs, there is a fairly good chance that more complex programs also work. ;)

with best regards,

stefan

Radovan Bast

unread,
Feb 26, 2013, 10:43:20 AM2/26/13
to dirac...@googlegroups.com
hi Tuomas and Stefan,
great! i am glad to hear that the cmake test program
can be trusted and that it pointed us to the right solution.
best wishes,
radovan