USPEX and SIESTA


USPEX

Jun 2, 2012, 5:29:16 PM
to us...@googlegroups.com
Dear users,

we would like to explain the special calculation regime that is available for the SIESTA code (used in test T13 of the new release). It is activated by the following parameters:

0 : remoteRegime 
SIESTAlocal : whichCluster 

and adding & at the end of the commandExecutable line for SIESTA:

siesta < input.fdf > output &


This is a parallel calculation regime on a local machine. It means that more than one CalcFolder can be created (6 in the T13 example). However, because it is a parallel regime, USPEX must be restarted regularly, e.g. using crontab.
USPEX will start the SIESTA calculations and quit - this is intended behavior. crontab (or the user) should then restart USPEX, for example every 10 minutes. Whenever a calculation in some CalcFolder finishes, a new one is started there.
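For the crontab part, an entry along the following lines could be used. This is only a sketch: the run directory and the exact USPEX invocation are assumptions for illustration, not taken from the release.

```
# Hypothetical crontab entry: restart USPEX every 10 minutes
# from the directory that holds INPUT.txt and the CalcFolders.
*/10 * * * * cd /home/user/uspex_run && ./USPEX >> uspex_cron.log 2>&1
```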

As an alternative, one could use nonParallel regime as in all other tests. For this you have to specify:

0 : remoteRegime 
nonParallel : whichCluster   

change the number of CalcFolders to 1:
1 : numParallelCalcs (how many parallel calculations shall be performed)

and remove the & at the end of the siesta line in the commandExecutable:

siesta < input.fdf > output

Sincerely yours,
 USPEX developer team

Daryn Benson

Jun 7, 2012, 3:37:40 AM
to USPEX
Hello,

I am familiar with how USPEX works with VASP. However, the way USPEX
works with SIESTA in this parallel mode is a bit confusing to me. With
VASP the calculations are done in serial, i.e. there is one relaxation,
then another and another, each with more stringent relaxation
requirements, until the target relaxation parameters are reached. In
the USPEX T13_SIESTA example I can see 5 input SIESTA files but 6
parallel calculations. I can also see that each of the relaxations has
more stringent requirements. But I don't understand how they can all
work together at the same time in parallel. Clearly I am confused
about this process!

Please, if you have any insight into this process that would help me
better understand how USPEX interfaces with SIESTA, I would greatly
appreciate it!

Daryn.

Andriy Lyakhov

Jun 7, 2012, 9:11:38 PM
to us...@googlegroups.com
Hello,

> But I don't understand how they can all work together at the same time in 
> parallel. Clearly I am confused about this process! 

The 6 calculation folders are used to relax 6 different structures. As for VASP, each relaxation is done step by step: in CalcFolder1 we start step 1 for structure A, CalcFolder2 hosts step 1 for structure B, and so on. When step 1 in any folder is finished, step 2 is started for that structure, etc. So, in essence, we parallelize over structures, not over relaxation steps. For every structure, once a CalcFolder is assigned, we do the same step-by-step relaxation as for VASP. In principle, one could employ a similar mode for VASP as well, assuming you have enough processors to run several VASP calculations at once. The problem with VASP is that one has to create a special script that 'tells' USPEX in some way that the calculation is finished, and when I tested this, scripts that worked on one machine did not always work on another.

I would like to stress that the main idea of this SIESTA regime is to use it on machines with more than one processor that do NOT have a job-submission system, so you cannot easily submit many jobs and track their execution. Since we have 6 such machines (with just 12 processors each), we developed this regime for ourselves to be able to do fast molecular-crystal calculations there. SIESTA is started in the background (hence the & at the end of the executable line), and we track its execution using the command 'grep "End of run" output'.
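That per-folder check can be sketched as a small shell loop. Only the 'grep "End of run" output' test comes from the regime described above; the folder names and the stand-in output file are illustrative assumptions.

```shell
#!/bin/sh
# Create a stand-in CalcFolder with a finished SIESTA output file,
# then apply the same 'grep "End of run"' test used to decide whether
# the next relaxation step can be started in that folder.
mkdir -p CalcFolder1
printf 'Job completed\nEnd of run\n' > CalcFolder1/output

for dir in CalcFolder1 CalcFolder2; do
  if grep -q "End of run" "$dir/output" 2>/dev/null; then
    echo "$dir: finished, next step can start"
  else
    echo "$dir: still running or not started"
  fi
done
```

Here CalcFolder1 would be reported as finished, while CalcFolder2 (which has no output file) would be reported as still running or not started.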

Hope it helped.

Sincerely,
 Andriy

Daryn Benson

Jun 8, 2012, 2:56:39 AM
to USPEX
Hello Andriy,

As usual, thank you for the fast and informative reply. This helped a
lot!

Thank you!
Daryn.