RuntimeError: There is no registered Platform called "GPU"


Sushil

Dec 15, 2017, 4:06:29 AM12/15/17
to Sire Users
Hi,

I am having an issue with the latest release (2017.1.0) precompiled binary of Sire. The machine is a Red Hat Enterprise Linux server with 4 K20 cards in each node. I installed Sire as a user in my home directory.

I used cuda-7.5 and exported OpenMM_PLUGIN_DIR to /home/sushil/sire.app/lib/plugins. Once I run the calculation with somd-freenrg, there is a problem finding the OpenCL and GPU platforms, e.g.:

[sushil@gwacsg12 lam-1.00000]$ /home/sushil/sire.app/bin/somd-freenrg -C ../../00.core/sim.cfg -t ../../00.core/ligand.parm7 -c ../../00.core/ligand.rst7 -p GPU -d 3 -l 1.00000
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

==============================================================
Sending anonymous Sire usage statistics to http://siremol.org.
For more information, see http://siremol.org/analytics
To disable, set the environment variable 'SIRE_DONT_PHONEHOME' to 1
To see the information sent, set the environment variable 
SIRE_VERBOSE_PHONEHOME equal to 1. To silence this message, set
the environment variable SIRE_SILENT_PHONEHOME to 1.
==============================================================


Loading configuration information from file ../../00.core/sim.cfg

Running a somd-freenrg calculation using files ../../00.core/ligand.parm7, ../../00.core/ligand.rst7 and ../../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 10000
center solute == True
constraint == allbonds
crdfile == ../../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 50000
gpu == 3
lambda_val == 1.0
minimize == True
morphfile == ../../00.core/MORPH.charge.pert
ncycles == 250
nmoves == 10000
platform == GPU
precision == mixed
save coordinates == True
topfile == ../../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg12 ###
###================Setting up calculation=====================###
 Re-initialisation of OpenMMFrEnergyST from datastream 
Index GPU = 3 
Loaded a restart file on which we have performed 0 moves.
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=1.0.
Traceback (most recent call last):
  File "/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/somd-freenrg.py", line 146, in <module>
    OpenMMMD.runFreeNrg(params)
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/__init__.py", line 172, in inner
    retval = func()
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/OpenMMMD.py", line 1634, in runFreeNrg
    system = integrator.annealSystemToLambda(system, equil_timestep.val, equil_iterations.val)
RuntimeError: There is no registered Platform called "GPU"


Is the error "There is no registered Platform called "GPU"" expected even when a GPU platform is found, or could there be some other problem?

Ipython seems to find CUDA platform.

IPython 5.3.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import simtk.openmm as mm
   ...: platforms = [ mm.Platform.getPlatform(index).getName() for index in range(mm.Platform.getNumPlatforms()) ]
   ...: print(platforms)
['Reference', 'CPU', 'CUDA', 'OpenCL']

Also, when I test the installation using the testInstallation.py script, it finds the CUDA platform but reports an error. I guess this is related to the fact that I couldn't request a GPU interactively under the current setup.

There are 4 Platforms available:

1 Reference - Successfully computed forces
2 CPU - Successfully computed forces
3 CUDA - Error computing forces with CUDA platform
4 OpenCL - Error computing forces with OpenCL platform

CUDA platform error: Error initializing CUDA: CUDA_ERROR_NO_DEVICE (100) at /anaconda/conda-bld/work/platforms/cuda/src/CudaContext.cpp:93

OpenCL platform error: Error initializing context: clGetPlatformIDs (-1001)

Median difference in forces between platforms:

Reference vs. CPU: 1.98071e-05

some more information about the system:
[sushil@gwacsg12 lam-1.00000]$ cat /proc/version
Linux version 3.10.0-514.el7.x86_64 (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Wed Oct 19 11:24:13 EDT 2016

[sushil@gwacsg12 lam-1.00000]$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17


Can someone suggest a workaround to fix/figure out the problem? 

regards,
Sushil




Antonia Mey

Dec 15, 2017, 4:25:23 AM12/15/17
to Sire Users
Hi Sushil,

could you double-check that your environment variable is called
OPENMM_PLUGIN_DIR and not OpenMM_PLUGIN_DIR,

i.e.

export OPENMM_PLUGIN_DIR=/home/sushil/sire.app/lib/plugins

using a bash shell.
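A quick sanity check (a bash sketch; the path is the one from this thread) to confirm the all-caps variable is actually exported and visible to child processes:

```shell
# Export the variable with the all-caps spelling and confirm the
# shell actually sees it (printenv only shows exported variables).
export OPENMM_PLUGIN_DIR=/home/sushil/sire.app/lib/plugins
printenv OPENMM_PLUGIN_DIR
```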

Also, you can quickly assess whether you have all available platforms by running the following:

$: /home/sushil/sire.app/bin/python

import simtk.openmm as mm   
print(mm.Platform.getNumPlatforms())
platforms = [ mm.Platform.getPlatform(index).getName() for index in range(mm.Platform.getNumPlatforms()) ]    
print (platforms)

This should give you the names of all available platforms. 

If this doesn't solve the issue, please let me know.

Kind regards,
Antonia

Sushil Mishra

Dec 15, 2017, 9:04:49 AM12/15/17
to sire-...@googlegroups.com
Hi Antonia,

Many thanks. I am sorry, this issue was probably due to a conflict between cuda 7.5 and 8.0 when submitting the job through PBS. Removing cuda 8.0 before loading cuda 7.5 fixed the problem, but now it looks for nvcc in /usr/local/cuda/bin/, which is not where nvcc exists on the cluster:

[sushil@gwacsg01 lam-0.00000]$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17


[sushil@gwacsg01 lam-0.00000]$ /home/sushil/sire.app/bin/somd-freenrg -C ../../00.core/sim.cfg -t ../../00.core/ligand.parm7 -c ../../00.core/ligand.rst7 -p CUDA -d 0 -l 0.00000
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

==============================================================
Sending anonymous Sire usage statistics to http://siremol.org.
For more information, see http://siremol.org/analytics
To disable, set the environment variable 'SIRE_DONT_PHONEHOME' to 1
To see the information sent, set the environment variable 
SIRE_VERBOSE_PHONEHOME equal to 1. To silence this message, set
the environment variable SIRE_SILENT_PHONEHOME to 1.
==============================================================

Loading configuration information from file ../../00.core/sim.cfg

Running a somd-freenrg calculation using files ../../00.core/ligand.parm7, ../../00.core/ligand.rst7 and ../../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 100
center solute == True
constraint == allbonds
crdfile == ../../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 50
gpu == 0
lambda_val == 0.0
minimize == True
morphfile == ../../00.core/MORPH.charge.pert
ncycles == 50
nmoves == 1000
platform == CUDA
precision == mixed
save coordinates == True
topfile == ../../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg29 ###
###================Setting up calculation=====================###
 Re-initialisation of OpenMMFrEnergyST from datastream 
Index GPU = 0 
Loaded a restart file on which we have performed 0 moves.
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.0.

Traceback (most recent call last):
  File "/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/somd-freenrg.py", line 146, in <module>
    OpenMMMD.runFreeNrg(params)
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/__init__.py", line 172, in inner
    retval = func()
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/OpenMMMD.py", line 1634, in runFreeNrg
    system = integrator.annealSystemToLambda(system, equil_timestep.val, equil_iterations.val)
RuntimeError: Error launching CUDA compiler: 32512
sh: /usr/local/cuda/bin/nvcc: No such file or directory



[sushil@gwacsg29 lam-0.00000]$ which nvcc
/gwfefs/opt/x86_64/cuda/7.5/bin/nvcc



I wonder if the path for nvcc is set by Sire, and whether I can assign it somehow?





Thanking you,

Sushil



--
You received this message because you are subscribed to a topic in the Google Groups "Sire Users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/sire-users/15RVm6367aY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to sire-users+unsubscribe@googlegroups.com.
To post to this group, send email to sire-...@googlegroups.com.
Visit this group at https://groups.google.com/group/sire-users.
For more options, visit https://groups.google.com/d/optout.

Antonia Mey

Dec 15, 2017, 9:42:57 AM12/15/17
to Sire Users
Hi Sushil,

you might have to set some appropriate CUDA environment variables in your PBS submission script then.

In my case I would simply put something like this in my bashrc file, but there will be an equivalent way to do it interactively at run time with your PBS:

export CUDA_HOME=/usr/local/cuda-7.5

export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH

export PATH=${CUDA_HOME}/bin:${PATH}


Best,
Antonia

Antonia Mey

Dec 15, 2017, 10:13:39 AM12/15/17
to sire-...@googlegroups.com
Hi Sushil,

you might be able to simply do this before you submit your job in your shell 

export CUDA_HOME=/gwfefs/opt/x86_64/cuda/7.5
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
export PATH=${CUDA_HOME}/bin:${PATH}

and then add this line to your submission script before submitting:
#PBS -V

This should inherit the environment variables from your shell, if I understand the location of your cuda installation correctly.
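For reference, a minimal submission script combining the two suggestions might look like the sketch below; the resource-request line is hypothetical and will need adjusting for your cluster, and the paths are the ones from this thread.

```shell
#!/bin/bash
#PBS -V                       # inherit environment variables from the submitting shell
#PBS -l nodes=1:ppn=1:gpus=1  # hypothetical resource request; adjust for your cluster
cd "$PBS_O_WORKDIR"
/home/sushil/sire.app/bin/somd-freenrg -C ../../00.core/sim.cfg \
    -t ../../00.core/ligand.parm7 -c ../../00.core/ligand.rst7 \
    -p CUDA -d 0 -l 0.00000
```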

Best,
Antonia

Dr Antonia Mey
University of Edinburgh
School of Chemistry
Joseph Black Building
Edinburgh
EH9 3FJ

Tel: +44 1316507748
Email: anton...@gmail.com




Sushil Mishra

Dec 15, 2017, 5:57:09 PM12/15/17
to sire-...@googlegroups.com
Thanks, Antonia. I did check the CUDA environment variables and they are set properly.

[sushil@gwacsg15 lam-0.00000]$ echo $CUDA_HOME 
/gwfefs/opt/x86_64/cuda/7.5


[sushil@gwacsg15 lam-0.00000]$ echo $LD_LIBRARY_PATH 

/gwfefs/opt/x86_64/cuda/7.5/lib64:/gwfefs/opt/x86_64/cuda/7.5/lib64:/gwfefs/opt/x86_64/cuda/7.5/lib:/gwfefs/home/sushil/soft/mysoft/amber-acs/amber16/lib:/bwfefs/opt/x86_64/intel/impi/2017.3.196/intel64/lib:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/compiler/lib/intel64:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/compiler/lib/intel64_lin:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/mpi/intel64/lib:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/mpi/mic/lib:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/ipp/lib/intel64:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/compiler/lib/intel64:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/mkl/lib/intel64:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/tbb/lib/intel64/gcc4.4:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/debugger_2017/iga/lib:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/debugger_2017/libipt/intel64/lib:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/daal/lib/intel64_lin:/bwfefs/opt/x86_64/intel/parallel_studio_xe_2017/compilers_and_libraries_2017//linux/daal/../tbb/lib/intel64_lin/gcc4.4:/gwfefs/home/sushil/soft/mysoft/boost-1.63/build/lib


[sushil@gwacsg15 lam-0.00000]$ which nvcc
/gwfefs/opt/x86_64/cuda/7.5/bin/nvcc


[sushil@gwacsg15 lam-0.00000]$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Tue_Aug_11_14:27:32_CDT_2015
Cuda compilation tools, release 7.5, V7.5.17


[sushil@gwacsg15 lam-0.00000]$ /home/sushil/sire.app/bin/somd-freenrg -C ../../00.core/sim.cfg -t ../../00.core/ligand.parm7 -c ../../00.core/ligand.rst7 -p CUDA -d 0 -l 0.1
lambda_val == 0.1
minimize == True
morphfile == ../../00.core/MORPH.charge.pert
ncycles == 50
nmoves == 1000
platform == CUDA
precision == mixed
save coordinates == True
topfile == ../../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg15 ###
###================Setting up calculation=====================###
New run. Loading input and creating restart
lambda is 0.1
Create the System...
Selecting dummy groups
Creating force fields... 
Setting up the simulation with random seed 550296
Setting up moves...
Created a MD move that uses OpenMM for all molecules on 0 
Generated random seed number 550296 
Saving restart
Setting up sim file. 
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.1.

Traceback (most recent call last):
  File "/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/somd-freenrg.py", line 146, in <module>
    OpenMMMD.runFreeNrg(params)
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/__init__.py", line 172, in inner
    retval = func()
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/OpenMMMD.py", line 1634, in runFreeNrg
    system = integrator.annealSystemToLambda(system, equil_timestep.val, equil_iterations.val)
RuntimeError: Error launching CUDA compiler: 32512
sh: /usr/local/cuda/bin/nvcc: No such file or directory



Do you think this is not related to Sire, and that it calls "nvcc" rather than $CUDA_PATH/bin/nvcc? Also, if I set -d to 1, 2 or 3 (each node has 4 GPU cards), I get another error.


[sushil@gwacsg15 lam-0.00000]$ nvidia-smi 
Sat Dec 16 07:53:17 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.48                 Driver Version: 367.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         Off  | 0000:04:00.0     Off |                    0 |
| N/A   48C    P0   141W / 235W |    277MiB /  5699MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20Xm         Off  | 0000:09:00.0     Off |                    0 |
| N/A   32C    P8    30W / 235W |      0MiB /  5699MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K20Xm         Off  | 0000:85:00.0     Off |                    0 |
| N/A   34C    P8    19W / 235W |      0MiB /  5699MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K20Xm         Off  | 0000:88:00.0     Off |                    0 |
| N/A   49C    P0   143W / 235W |    183MiB /  5699MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     19056    C   pmemd.cuda                                     275MiB |
|    3     12438    C   pmemd.cuda                                     181MiB |
+-----------------------------------------------------------------------------+

[sushil@gwacsg15 lam-0.00000]$ /home/sushil/sire.app/bin/somd-freenrg -C ../../00.core/sim.cfg -t ../../00.core/ligand.parm7 -c ../../00.core/ligand.rst7 -p CUDA -d 1 -l 0.1
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

Loading configuration information from file ../../00.core/sim.cfg

Running a somd-freenrg calculation using files ../../00.core/ligand.parm7, ../../00.core/ligand.rst7 and ../../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 100
center solute == True
constraint == allbonds
crdfile == ../../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 50
gpu == 1
lambda_val == 0.1
minimize == True
morphfile == ../../00.core/MORPH.charge.pert
ncycles == 50
nmoves == 1000
platform == CUDA
precision == mixed
save coordinates == True
topfile == ../../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg15 ###
###================Setting up calculation=====================###
New run. Loading input and creating restart
lambda is 0.1
Create the System...
Selecting dummy groups
Creating force fields... 
Setting up the simulation with random seed 195923
Setting up moves...
Created a MD move that uses OpenMM for all molecules on 1 
Generated random seed number 195923 
Saving restart
Setting up sim file. 
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.1.

Traceback (most recent call last):
  File "/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/somd-freenrg.py", line 146, in <module>
    OpenMMMD.runFreeNrg(params)
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/__init__.py", line 172, in inner
    retval = func()
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/OpenMMMD.py", line 1634, in runFreeNrg
    system = integrator.annealSystemToLambda(system, equil_timestep.val, equil_iterations.val)
RuntimeError: Illegal value for CudaDeviceIndex: 1



Regards,

Sushil







Sushil Mishra

Dec 15, 2017, 8:39:11 PM12/15/17
to sire-...@googlegroups.com
Hi Antonia,

I seem to have found the issue in the OpenMM code. If this code is the same as in the OpenMM repository, the file openmm/platforms/cuda/src/CudaPlatform.cpp is where it looks for the nvcc path:

char* compiler = getenv("OPENMM_CUDA_COMPILER");
string nvcc = (compiler == NULL ? "/usr/local/cuda/bin/nvcc" : string(compiler));

It works fine after setting the OPENMM_CUDA_COMPILER environment variable to the right path. I don't know if it would make more sense not to hard-code a default path in the code, and instead to set the compiler string to just "nvcc".
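A workaround along these lines (a sketch, not a change to Sire or OpenMM itself) is to export OPENMM_CUDA_COMPILER before running, falling back to the compiled-in default only when nvcc is not on PATH; the cluster path here is the one from this thread:

```shell
# Point OpenMM at the cluster's nvcc via OPENMM_CUDA_COMPILER instead
# of the hard-coded /usr/local/cuda/bin/nvcc default.
if command -v nvcc >/dev/null 2>&1; then
    export OPENMM_CUDA_COMPILER="$(command -v nvcc)"
else
    export OPENMM_CUDA_COMPILER=/gwfefs/opt/x86_64/cuda/7.5/bin/nvcc
fi
echo "OPENMM_CUDA_COMPILER=$OPENMM_CUDA_COMPILER"
```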

I am still getting the error "RuntimeError: Illegal value for CudaDeviceIndex: 1" when -d is set to 1, 2 or 3. I am actually using '-d $CUDA_VISIBLE_DEVICES' in my PBS script to choose which GPU is allocated to the job.
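One thing worth checking here (an assumption about the cause, not confirmed in this thread): CUDA re-indexes the devices listed in CUDA_VISIBLE_DEVICES, so inside a job that was granted only one card (e.g. CUDA_VISIBLE_DEVICES=3) the sole visible device has index 0, and a device index of 1, 2 or 3 would be rejected. A small stdlib-only sketch of that mapping, with a hypothetical helper name:

```python
import os

def device_index_inside_job(physical_gpu, visible=None):
    """Map a physical GPU id to the index seen inside the job
    (hypothetical helper illustrating CUDA_VISIBLE_DEVICES re-indexing)."""
    if visible is None:
        visible = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    ids = [int(x) for x in visible.split(",") if x.strip()]
    if not ids:                      # no restriction: indices match physical ids
        return physical_gpu
    return ids.index(physical_gpu)   # raises ValueError if the card is not visible

print(device_index_inside_job(3, visible="3"))    # -> 0
print(device_index_inside_job(2, visible="0,2"))  # -> 1
```

If this is what is happening, passing '-d 0' while letting PBS set CUDA_VISIBLE_DEVICES would select the allocated card.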

Regards,
Sushil

Antonia Mey

Dec 17, 2017, 10:01:55 AM12/17/17
to Sire Users
Dear Sushil,

I am not sure I can help with the troubleshooting here. Usually I don't specifically set the device ID when I run things, and the queuing system we use is Slurm.

It also seems to be an OpenMM-related problem. I am somewhat familiar with OpenMM, but probably the wrong person to troubleshoot this.

Before trying to run the whole somd-freenrg calculation, it may be worthwhile trying your PBS script on the small OpenMM code snippet I suggested, specifically setting a device ID, since this should work the same way in somd-freenrg as it does in OpenMM. I'll have a bit more of a think to see if I can help further.

Best,
Antonia

Sushil Mishra

Dec 18, 2017, 3:46:26 AM12/18/17
to sire-...@googlegroups.com
Hi Antonia,

Thank you so much. I am seeing some strange behaviour now. Again, Sire cannot detect the "GPU" platform, on the same machine but different nodes. I guess I made some mistake earlier (probably rerunning Sire on top of OpenCL/CPU calculation output), so it was running on those platforms and not on the GPUs. I really can't reproduce that workaround to make it work. Anyway, I am putting everything here once again, in case this issue is related to Sire. I am using an interactive node now, so everything is done with the same environment variables and paths for each test.

> Sire detects all 4 platforms when using testInstallation.py

[sushil@gwacsg02 test-gpu]$ ~/sire.app/bin/python ~/sire.app/lib/python3.5/site-packages/simtk/testInstallation.py
There are 4 Platforms available:

1 Reference - Successfully computed forces
2 CPU - Successfully computed forces
3 CUDA - Successfully computed forces
4 OpenCL - Successfully computed forces

Median difference in forces between platforms:

Reference vs. CPU: 1.97982e-05
Reference vs. CUDA: 2.15252e-05
CPU vs. CUDA: 1.55618e-05
Reference vs. OpenCL: 2.15122e-05
CPU vs. OpenCL: 1.55428e-05
CUDA vs. OpenCL: 1.32549e-07

> OpenMM detects the GPU platforms

[sushil@gwacsg02 test-gpu]$ ~/sire.app/bin/python -c "import simtk.openmm as mm   
 print (mm.Platform.getNumPlatforms())
 platforms = [ mm.Platform.getPlatform(index).getName() for index in range(mm.Platform.getNumPlatforms()) ]    
 print (platforms)"
4
['Reference', 'CPU', 'CUDA', 'OpenCL']


> somd-freenrg can't detect the GPU platform

[sushil@gwacsg02 test-gpu]$ /home/sushil/sire.app/bin/somd-freenrg -C ../00.core/sim-q.cfg -t ../00.core/ligand.parm7 -c ../00.core/ligand.rst7 -p GPU -d 0 -l 0.1
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

Loading configuration information from file ../00.core/sim-q.cfg

Running a somd-freenrg calculation using files ../00.core/ligand.parm7, ../00.core/ligand.rst7 and ../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 100
center solute == True
constraint == allbonds
crdfile == ../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 5
gpu == 0
lambda array == (0.0, 0.0625, 0.125, 0.1875, 0.25, 0.3125, 0.375, 0.4375, 0.5, 0.5625, 0.625, 0.6875, 0.75, 0.8125, 0.875, 0.9375, 1.0)
lambda_val == 0.1
minimize == True
morphfile == ../00.core/MORPH.charge.pert
ncycles == 2
nmoves == 100
platform == GPU
precision == mixed
save coordinates == True
topfile == ../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg02 ###
###================Setting up calculation=====================###
New run. Loading input and creating restart
lambda is 0.1
Create the System...
Selecting dummy groups
Creating force fields... 
Setting up the simulation with random seed 616538
Setting up moves...
Created a MD move that uses OpenMM for all molecules on 0 
Generated random seed number 616538 
Saving restart
Setting up sim file. 
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.1.
Traceback (most recent call last):
  File "/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/somd-freenrg.py", line 146, in <module>
    OpenMMMD.runFreeNrg(params)
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/__init__.py", line 172, in inner
    retval = func()
  File "/home/sushil/sire.app/lib/python3.5/site-packages/Sire/Tools/OpenMMMD.py", line 1634, in runFreeNrg
    system = integrator.annealSystemToLambda(system, equil_timestep.val, equil_iterations.val)
RuntimeError: There is no registered Platform called "GPU"


> It works with OpenCL

[sushil@gwacsg02 test-gpu]$ /home/sushil/sire.app/bin/somd-freenrg -C ../00.core/sim-q.cfg -t ../00.core/ligand.parm7 -c ../00.core/ligand.rst7 -p OpenCL -d 0 -l 0.1
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

Loading configuration information from file ../00.core/sim-q.cfg

Running a somd-freenrg calculation using files ../00.core/ligand.parm7, ../00.core/ligand.rst7 and ../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 100
center solute == True
constraint == allbonds
crdfile == ../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 5
gpu == 0
lambda array == (0.0, 0.0625, 0.125, 0.1875, 0.25, 0.3125, 0.375, 0.4375, 0.5, 0.5625, 0.625, 0.6875, 0.75, 0.8125, 0.875, 0.9375, 1.0)
lambda_val == 0.1
minimize == True
morphfile == ../00.core/MORPH.charge.pert
ncycles == 2
nmoves == 100
platform == OpenCL
precision == mixed
save coordinates == True
topfile == ../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg02 ###
###================Setting up calculation=====================###
New run. Loading input and creating restart
lambda is 0.1
Create the System...
Selecting dummy groups
Creating force fields... 
Setting up the simulation with random seed 210329
Setting up moves...
Created a MD move that uses OpenMM for all molecules on 0 
Generated random seed number 210329 
Saving restart
Setting up sim file. 
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.1.

=======================================================###===========================================================###


Respecting your privacy - not sending usage statistics.
###====================somd-freenrg run=======================###
Please see http://siremol.org/analytics for more information.
Starting somd-freenrg run...
=======================================================
100 moves 2 cycles, 0.4 ps simulation time


Cycle =  1 


Cycle =  2 

Simulation took 7 s 
###===========================================================###

Clearing buffers...
Backing up previous restart
Saving new restart

> Now if I extend the same calculation using -p GPU, it works, but I guess it is actually reading OpenCL from the restart files and ignoring the -p GPU from the command line

[sushil@gwacsg02 test-gpu]$ /home/sushil/sire.app/bin/somd-freenrg -C ../00.core/sim-q.cfg -t ../00.core/ligand.parm7 -c ../00.core/ligand.rst7 -p GPU -d 0 -l 0.1
Starting /home/sushil/sire.app/bin/somd-freenrg: number of threads equals 24

Loading configuration information from file ../00.core/sim-q.cfg

Running a somd-freenrg calculation using files ../00.core/ligand.parm7, ../00.core/ligand.rst7 and ../00.core/MORPH.charge.pert.
Using parameters:
===============
andersen == True
barostat == True
buffered coordinates frequency == 100
center solute == True
constraint == allbonds
crdfile == ../00.core/ligand.rst7
cutoff distance == 9 angstrom
cutoff type == cutoffperiodic
energy frequency == 100
equilibrate == True
equilibration iterations == 5
gpu == 0
lambda array == (0.0, 0.0625, 0.125, 0.1875, 0.25, 0.3125, 0.375, 0.4375, 0.5, 0.5625, 0.625, 0.6875, 0.75, 0.8125, 0.875, 0.9375, 1.0)
lambda_val == 0.1
minimize == True
morphfile == ../00.core/MORPH.charge.pert
ncycles == 2
nmoves == 100
platform == GPU
precision == mixed
save coordinates == True
topfile == ../00.core/ligand.parm7
===============
### Running Single Topology Molecular Dynamics Free Energy on gwacsg02 ###
###================Setting up calculation=====================###
 Re-initialisation of OpenMMFrEnergyST from datastream 
Index GPU = 0 
Loaded a restart file on which we have performed 200 moves.
There are 3727 atoms in the group 
###===========================================================###

###======================Equilibration========================###
Running lambda equilibration to lambda=0.1.

=======================================================###===========================================================###


Respecting your privacy - not sending usage statistics.
###====================somd-freenrg run=======================###
Please see http://siremol.org/analytics for more information.
Starting somd-freenrg run...
=======================================================
100 moves 2 cycles, 0.4 ps simulation time


Cycle =  3 


Cycle =  4 

Simulation took 9 s 
###===========================================================###

Clearing buffers...
Backing up previous restart
Saving new restart


Both OpenCL and CUDA give the error "RuntimeError: Illegal value for OpenCLDeviceIndex/CudaDeviceIndex" if the device index is not 0. I am back to square one, since the "GPU" platform is not detected.

Best,
Sushil





 





Antonia Mey

Dec 18, 2017, 6:57:28 AM12/18/17
to sire-...@googlegroups.com
Hi Sushil,

I think I am following. 

There is no registered platform called "GPU". Sire uses the same platforms as are available to OpenMM, since the keyword is passed straight through to OpenMM.

So you have the option -p OpenCL or -p CUDA in order to run simulations on a GPU. 

If you originally set -p to either of the two platforms, any restart from the initial simulation will use the same platform as before; the default behaviour at the moment is that the -p flag is ignored for restarts (maybe this behaviour isn't optimal).
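That restart behaviour can be summed up in a tiny sketch (hypothetical code illustrating the described behaviour, not the actual Sire implementation): the platform stored in the restart datastream wins over the command-line -p flag.

```python
def choose_platform(cmdline_platform, restart=None):
    """Illustrative only: if a restart datastream exists, its stored
    platform is used and the -p value is silently ignored."""
    if restart is not None:
        return restart["platform"]
    return cmdline_platform

print(choose_platform("GPU", restart={"platform": "OpenCL"}))  # -> OpenCL
print(choose_platform("CUDA"))                                 # -> CUDA
```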

As for the device index: if you pass anything other than -d 0, you get an error, is that correct?

Could you share the output of nvidia-smi with me? 

I’ll see if I can reproduce your error with our setup here. 

Best,
Antonia

Sushil Mishra

Dec 18, 2017, 7:12:38 AM12/18/17
to sire-...@googlegroups.com
Hi Antonia,

Right, I was trying both the -p OpenCL and -p CUDA options. Calculations using '-p OpenCL -d 0' run fine, but I get an error for any other GPU index (1, 2, 3). The option '-p CUDA' always fails with no GPU platform registered. 

This is the output of $nvidia-smi

[sushil@gwacsg09 test-gpu]$ nvidia-smi 
Mon Dec 18 21:07:04 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.48                 Driver Version: 367.48                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20Xm         Off  | 0000:04:00.0     Off |                    0 |
| N/A   41C    P0   137W / 235W |    183MiB /  5699MiB |     99%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20Xm         Off  | 0000:09:00.0     Off |                    0 |
| N/A   34C    P0    74W / 235W |    214MiB /  5699MiB |     24%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K20Xm         Off  | 0000:85:00.0     Off |                    0 |
| N/A   35C    P0    69W / 235W |    215MiB /  5699MiB |     22%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K20Xm         Off  | 0000:88:00.0     Off |                    0 |
| N/A   29C    P8    31W / 235W |      0MiB /  5699MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     31080    C   pmemd.cuda                                     181MiB |
|    1     20885    C   ./namd2                                        212MiB |
|    2     20885    C   ./namd2                                        213MiB |
+-----------------------------------------------------------------------------+
[sushil@gwacsg09 test-gpu]$ 



Thanks,
Sushil




Dr Antonia Mey
University of Edinburgh
School of Chemistry
Joseph Black Building
Edinburgh
EH9 3FJ

Tel: +44 1316507748
Email: anton...@gmail.com





Antonia Mey
Dec 18, 2017, 9:32:35 AM
to sire-...@googlegroups.com
Hi Sushil,
I am struggling to reproduce your problem. 

I can run the following just fine:
somd-freenrg -C ../../input/sim.cfg -c ../../input/SYSTEM.crd -t ../../input/SYSTEM.top -m ../../input/MORPH.pert -p CUDA -l 0.00 -d 2

and it successfully runs on GPU 2:
Mon Dec 18 12:01:11 2017       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.39     Driver Version: 352.39         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 980 Ti  Off  | 0000:04:00.0      On |                  N/A |
| 22%   40C    P8    15W / 250W |     71MiB /  6142MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 980 Ti  Off  | 0000:05:00.0     Off |                  N/A |
| 22%   42C    P8    15W / 250W |     20MiB /  6143MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 980 Ti  Off  | 0000:08:00.0     Off |                  N/A |
| 22%   50C    P2    88W / 250W |    126MiB /  6143MiB |     55%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GeForce GTX 980 Ti  Off  | 0000:09:00.0     Off |                  N/A |
| 22%   43C    P8    15W / 250W |     20MiB /  6143MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

                                                                               

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1405    G   /usr/bin/X                                      49MiB |
|    2     30787    C   /home/ppxasjsm/sire.app_dev/bin/somd-freenrg   105MiB |
+-----------------------------------------------------------------------------+

I am not really sure what the issue could be. 


Sushil Mishra
Dec 19, 2017, 2:51:02 AM
to sire-...@googlegroups.com
Hi Antonia,

Never mind; it seems to be something very strange, as I tried to debug it the whole day without any success.

I can confirm that the error ("RuntimeError: There is no registered Platform called "GPU") remains even though all 4 platforms are found by mm.Platform.getNumPlatforms().

Moreover, I am fairly confident that this error is not from OpenMM. I can successfully run the OpenMM examples (HelloArgon etc.) in "/home/sushil/sire.app/pkgs/openmm-7.0.1-py35_0/share/openmm/examples". All these examples detect the CUDA platform automatically!

[sushil@gwacsg01 examples]$ ./HelloArgon 
REMARK  Using OpenMM platform CUDA
MODEL     1
ATOM      1  AR   AR     1       0.000   0.000   0.000  1.00  0.00
ATOM      2  AR   AR     1       5.000   0.000   0.000  1.00  0.00


Anyway, I will probably stick to OpenCL if I can manage to request only the first GPU in my PBS scripts. I hope there are no serious drawbacks to using OpenCL on NVIDIA cards.

Many thanks for all the efforts.
Sushil
  


Antonia Mey
Dec 19, 2017, 7:36:35 AM
to sire-...@googlegroups.com
Hi Sushil,

this is correct: there is no registered platform called GPU; there is either CUDA or OpenCL. The default option does not exist, so you need to pass either OpenCL or CUDA with the -p flag.

If you can run the OpenMM examples, you can run on the CUDA platform with Sire. 
This should be what you have to use:

/home/sushil/sire.app/bin/somd-freenrg -C ../00.core/sim-q.cfg -t ../00.core/ligand.parm7 -c ../00.core/ligand.rst7 -p CUDA -d 0 -l 0.1

You are passing a MORPH.pert file and a lambda array as well, and not simply running a single simulation at lambda = 0.1, right?

Best,
Antonia


Sushil Mishra
Dec 19, 2017, 8:49:52 AM
to sire-...@googlegroups.com
Hi Antonia,

Thanks for pointing out the mistake. Sorry, it was a stupid oversight on my part. Both CUDA and OpenCL work fine with '-d 0'.

I am passing a lambda array and a MORPH.pert file in the input file sim-q.cfg. We did not use a lambda array in an older version back in 2014, but I assigned one because there was a warning that reduced perturbed energies could not be written to the file. Does it affect the job execution somehow? 

I plan to run separate simulations at each lambda value, with the lambda array assigned in the morph.cfg file. Later I will use the average gradients from each simulation for integration. Is this valid for the current setup?
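In other words, something like this trapezoid rule over the per-window average gradients (the lambda values and gradient numbers below are invented purely to illustrate the arithmetic):

```python
# Trapezoidal TI: integrate <dU/dlambda> over the lambda windows.
# All numbers are made up for illustration, not from a real simulation.
lambdas  = [0.0, 0.25, 0.5, 0.75, 1.0]
avg_grad = [12.0, 8.0, 5.0, 3.0, 2.0]   # average gradient per window, kcal/mol

dG = sum(0.5 * (avg_grad[i] + avg_grad[i + 1]) * (lambdas[i + 1] - lambdas[i])
         for i in range(len(lambdas) - 1))
print(dG)  # 5.75 kcal/mol for these made-up numbers
```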

Many thanks!
Sushil

Antonia Mey
Dec 19, 2017, 9:09:26 AM
to sire-...@googlegroups.com
Hi Sushil,

yes, that seems like a reasonable approach. You may want to consider using MBAR for your free energy analysis.

If you run into any issues with your analysis, please let me know. 
Have a look at this for an initial explanation:
This may be relevant to you.

Best,
Antonia


Sushil Mishra
Dec 19, 2017, 11:23:38 PM
to sire-...@googlegroups.com
Hi Antonia,

Many thanks for letting me know about the implementation of MBAR analysis. It may be interesting to compare TI and MBAR.  

I also managed to figure out the CudaDeviceIndex issue. It seems OpenMM validates the value passed to -d against the total number of GPU devices (numDevices) on the node, and gives this error if -d is < -1 or >= numDevices. 

In my case, I was requesting just 1 GPU card in the script, so the numDevices found by the code is 1 and not 4 (calculated using cuDeviceGetCount?). Thus, setting -d to 1, 2 or 3 resulted in this error. Interestingly, using -d 0 in each case runs the job on the allocated GPU card and not on the first card (index 0). So using -d 0 solves my problem, though I still need to check carefully across several runs that this behaviour is consistent and that the calculations do not interfere with anything running on the GPU with index 0. 
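In pseudo-Python, the behaviour I am seeing looks like this sketch (the allocated device id and the function are invented for illustration; the range check is a simplified version of what I described above):

```python
import os

# When the scheduler allocates one GPU, it exports CUDA_VISIBLE_DEVICES,
# and the process only sees the devices listed there, renumbered from 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"          # hypothetical allocation
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
num_devices = len(visible)                         # 1 inside the job, not 4

def check_device_index(d):
    # Simplified version of the range check described above
    if d < 0 or d >= num_devices:
        raise RuntimeError("Illegal value for CudaDeviceIndex")
    return d

print(check_device_index(0))   # OK: index 0 maps to the allocated physical GPU
# check_device_index(1) would raise, as observed in this thread
```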

Many thanks for helping me out.
Sushil

Christopher Woods
Dec 20, 2017, 12:30:16 AM
to sire-...@googlegroups.com

Many queuing systems handle this for you, so if you ask for one GPU on a multi-GPU node, then the queuing system will dynamically map the GPU device ID of your allocated GPU to 0. This stops you from having to worry about it (as you shouldn’t have to choose your GPU if the queueing system has already allocated it).

The best way to check is to print the value of the env variable CUDA_AVAILABLE_DEVICES (if I remember correctly).

  Best wishes,

  Christopher 





Sushil Mishra
Dec 20, 2017, 8:24:28 PM
to sire-...@googlegroups.com
Thanks for explaining the mapping feature. However, the env variable $CUDA_VISIBLE_DEVICES (when requesting an interactive node via PBS) was printing the actual device ID of the GPU allocated to my job. That made me think of using $CUDA_VISIBLE_DEVICES as the input for -d. 

Sincerely,
Sushil


Sushil Mishra
Jan 23, 2018, 12:17:17 AM
to sire-...@googlegroups.com
Hi Antonia,

I was playing a bit with TI and MBAR analysis using analyse_freenrg.
Most of the perturbations I tried had 16 lambda values, but it seems
there is not enough overlap between some of the lambda windows.
The program complains:

#running mbar done ===============================================
/bwfefs/home/sushil/sire.app/pkgs/sire-2017.1.0/share/Sire/scripts/analyse_freenrg.py:576:
UserWarning: Off diagonal elements of the overlap matrix are smaller
than 0.03! Your free energy estiamte is not reliable!

I do see many zeros in the overlap matrix in these cases, so the
complaint is genuine. I wonder if it is possible to add additional
lambda windows (where required) separately to the current simulations
and analyse them (as we do in the case of TI) using a new lambda array?
For TI I can analyse the gradients collected in the gradients.dat file,
but I don't see any easy workaround for MBAR.
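For illustration, a toy version of the check behind that warning (the 0.03 threshold is taken from the warning text; the matrix values are invented):

```python
# Toy overlap matrix for 3 lambda windows (values invented). The warning
# above fires when off-diagonal overlap elements drop below 0.03; here we
# check the first off-diagonal, which couples neighbouring windows.
overlap = [
    [0.90, 0.10, 0.00],
    [0.10, 0.80, 0.10],
    [0.00, 0.10, 0.90],
]
first_off_diag = [overlap[i][i + 1] for i in range(len(overlap) - 1)]
if any(x < 0.03 for x in first_off_diag):
    print("overlap too small: free energy estimate is not reliable")
else:
    print("overlap OK")
```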

The second question is about the -p option in analyse_freenrg. Is it
possible to skip, or use, a constant number of data points rather than some percentage p?

Sincerely,
Sushil
Antonia Mey
Jan 23, 2018, 9:03:25 AM
to sire-...@googlegroups.com
Hi Sushil,

how similar are your TI and MBAR estimates? That is usually also a good indication of how good or bad your simulations are. 
You are expected to see many 0's in the overlap matrix, just ideally not in the first and second off-diagonals. 
Adding additional lambda windows is possible, but a bit involved. 
Off the top of my head, I don't think analyse_freenrg can handle this easily. Also, your original simulations won't have information on the bias energies for the additional windows. That in principle is not a problem, but unless you saved a lot of frames it will be difficult to post-compute. 

I'll see if I can come up with an easy example of how to achieve additional lambda windows using analyse_freenrg. This is definitely something that would be good to support more easily. 
As for skipping initial data points: if you use the analyse_freenrg mbar --discard number_of_frames option instead of -p, you can discard a given number of initial frames from each trajectory. 


Let me know if this is what you were after. 

Regards,
Antonia 

Sushil Mishra
Jan 23, 2018, 9:49:58 PM
to sire-...@googlegroups.com
Hi Antonia, many thanks.

On Tue, Jan 23, 2018 at 11:03 PM, Antonia Mey <anton...@gmail.com> wrote:
> Hi Sushil,
>
> how similar is your TI estimate and you MBAR estimate? That is usually also
> a good indication on how good or bad your simulations are.

They are similar in most cases, but for some of the perturbations I
get a different profile. I wanted to check whether adding additional
lambda values would improve the agreement between TI and MBAR even in
short runs. For example, the PMF I attach here shows different
patterns; I was interested in adding some more lambda values between
0.3 and 0.6 in this case. These simulations are short (~2 ns per
window) and extending them might help, but I wanted to be careful and
check whether it would be possible to post-process the original
simulations for additional lambda values.

> You are expected to see many 0’s in the overlap matrix, just ideally not in
> the first and second off diagonals.

I am safe then; in almost all cases the matrix is non-zero in the 1st
and 2nd off-diagonals.

> Adding additional lambda windows is possible, but a bit involved.
> From the top of my head I don’t think analyse_freenrg can handle this
> easily. Also your original simulations won’t have information on the bias
> energies for the additional windows. That in principle is not a problem and
> unless you saved a lot of the frames will be difficult to post compute.
> I’ll see if I can come up with an easy example of how to achieve additional
> lambda windows using analyse_freenrg. This is definitely something that
> would be good to support more easily.

As of now, I will just go ahead either with rerunning everything or
with TI in these cases. But it would be great to have this feature,
as adding additional lambda values is often required in this kind of
calculation.

> As for the skipping initial data points. If you use the analyse_freenrg mbar
> - - discard number_of_frames option instead of -p you can discard a given
> set of initial frames from each trajectory.

This is what I needed. Thanks.

>
>
> Let me know if this is what you were after.
>
> Regards,
> Antonia
>
Thanking you,
Sushil
[Attachment: PMF-Kelm2-to-4c.png]

Antonia Mey
Jan 29, 2018, 6:54:35 AM
to sire-...@googlegroups.com
Hi Sushil,

sorry for the slow response, but I was on holiday. 

Based on the PMFs, MBAR does look like it needs better overlap. The TI curves do look quite smooth. 
For sure you can process the additional windows with TI.

Being able to add additional windows for MBAR is definitely something that should be supported easily. I’ll add it as a feature request for a future release, because I feel it needs a bit more than a quick hack in order to be able to do this. 

I think there is a way to do it now, but I will have to do a bit of testing to verify it.

Best,
Antonia