I am installing the HOOMD GPU MPI version on my university's HPC cluster. After starting an interactive session with one GPU card requested, and loading the modules cuda/8.0, gcc/6.3.0, cmake/3.5.2, and python/2.7.12, I compiled openmpi 2.1.1 with CUDA support locally, and compiled fftw 3.3.6 with MPI enabled locally. Then I added the following lines to ~/.bashrc and ~/.bash_profile, and ran ". ~/.bashrc" to pick up the new PATH and dynamic library path:
export PATH=$HOME/software/openmpi_gpu/bin:$PATH
export LD_LIBRARY_PATH=$HOME/software/openmpi_gpu/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$HOME/software/fftw/lib:$LD_LIBRARY_PATH

Then I used the commands below to compile hoomd:

cd hoomd-blue
mkdir build
cd build
cmake ../ -DCMAKE_INSTALL_PREFIX=$HOME/programs/hoomd-blue/build
make all
make install

I ran into the following problems during the steps above:

1. During "cmake ../ -DCMAKE_INSTALL_PREFIX=$HOME/programs/hoomd-blue/build": it reports that it cannot find fftw, even though I have fftw 3.3.6 compiled. Do I need fftw 2.2.5 instead?
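One thing I am not sure about is whether cmake needs to be told the fftw prefix explicitly, since it does not search $HOME prefixes by default. Something like the following might work, using cmake's standard CMAKE_PREFIX_PATH variable (whether HOOMD's find module picks fftw up from it is my assumption):

cmake ../ -DCMAKE_PREFIX_PATH=$HOME/software/fftw -DCMAKE_INSTALL_PREFIX=$HOME/programs/hoomd-blue/build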
2. During "make all": it always says

/nas/longleaf/home/minzhi/hoomd-blue/hoomd/extern/kernels/segreducecsr.cuh:394:8: warning: ‘auto_ptr’ is deprecated (declared at /usr/include/c++/4.8.2/backward/auto_ptr.h:87) [-Wdeprecated-declarations]
CudaContext& context)

But I am using gcc/6.3.0, which is newer than 4.8.5.
May I have your suggestions about these issues?

Besides, if I install anaconda2 on the HPC cluster, then load one GPU card and all of the modules described above, and have the openmpi_gpu and fftw 3.3.6 compiled locally, will the conda commands below give me an ENABLE_MPI_CUDA version of HOOMD? Or, if I want the ENABLE_MPI_CUDA version, must I compile it myself?
$ conda config --add channels glotzer
$ conda install hoomd

Thank you very much.

Best regards,
Wusheng
Wusheng,
To enable MPI and CUDA, pass the flags explicitly when configuring:

cmake ../ -DENABLE_MPI=ON -DENABLE_CUDA=ON -DCMAKE_INSTALL_PREFIX=$HOME/programs/hoomd-blue/build
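HOOMD 2.x prints its compile options in the startup banner, so you can check that the flags took effect without writing a script. A quick way to trigger the banner, assuming the built module is on your PYTHONPATH:

$ python -c "import hoomd; hoomd.context.initialize('')"

Look for MPI and MPI_CUDA among the options printed on the first line.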
Wusheng,
And auto_ptr is deprecated in newer versions of gcc; you're using a very recent version, which is why Jena said you can safely ignore this warning.
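If the warning noise bothers you, one option is to pass the usual gcc switch through to both the C++ and the nvcc host compilations. An untested sketch (CUDA_NVCC_FLAGS is the FindCUDA variable; this only silences the diagnostic, the code is unchanged):

cmake ../ -DENABLE_MPI=ON -DENABLE_CUDA=ON -DCMAKE_CXX_FLAGS="-Wno-deprecated-declarations" -DCUDA_NVCC_FLAGS="-Xcompiler -Wno-deprecated-declarations" -DCMAKE_INSTALL_PREFIX=$HOME/programs/hoomd-blue/build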
Regards,
Mike
Thank you both. Here is the full procedure that ended up working for me:

module load cuda/8.0
module load cmake/3.5.2
module load openmpi_gcc/local_4.8.5_cuda8.0  # this is the openmpi compiled with cuda 8.0 and gcc 4.8.5
conda install sphinx pkg-config sqlite
git clone https://bitbucket.org/glotzer/hoomd-blue
cd hoomd-blue
mkdir build
cd build
cmake ../ -DCMAKE_INSTALL_PREFIX=/nas/longleaf/home/minzhi/anaconda2/lib/python2.7/site-packages -DENABLE_MPI=ON -DENABLE_CUDA=ON -DENABLE_MPI_CUDA=ON
make -j1
make install
Then I set the Python path in .bashrc and sourced it:
export PYTHONPATH=$PYTHONPATH:/nas/longleaf/home/minzhi/anaconda2/lib/python2.7/site-packages
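Then I submitted the run through the batch scheduler. A minimal sketch of the kind of script used, assuming SLURM (partition and resource names are site-specific):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=2             # one MPI rank per GPU
#SBATCH --gres=gpu:2           # request two GPU cards

module load cuda/8.0
module load openmpi_gcc/local_4.8.5_cuda8.0

mpirun -np 2 python lj.py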
The job ran the example lj.py from your website on 2 GPU cards and two CPUs. This is what was printed on the screen:
HOOMD-blue v2.2.0-2-g75f97d0 CUDA (8.0) DOUBLE HPMC_MIXED MPI MPI_CUDA SSE SSE2
Compiled: 09/15/2017
Copyright 2009-2017 The Regents of the University of Michigan.
-----
You are using HOOMD-blue. Please cite the following:
* J A Anderson, C D Lorenz, and A Travesset. "General purpose molecular dynamics
simulations fully implemented on graphics processing units", Journal of
Computational Physics 227 (2008) 5342--5359
* J Glaser, T D Nguyen, J A Anderson, P Liu, F Spiga, J A Millan, D C Morse, and
S C Glotzer. "Strong scaling of general-purpose molecular dynamics simulations
on GPUs", Computer Physics Communications 192 (2015) 97--107
-----
notice(2): This system is not compute exclusive, using local rank to select GPUs
HOOMD-blue is running on the following GPU(s):
Rank 0: [0] GeForce GTX 1080 20 SM_6.1 @ 1.73 GHz, 8113 MiB DRAM
Rank 1: [1] GeForce GTX 1080 20 SM_6.1 @ 1.73 GHz, 8113 MiB DRAM
lj.py:006 | hoomd.init.create_lattice(unitcell=hoomd.lattice.sc(a=2.0, type_name='A'), n=10)
lj.py:006 | hoomd.init.create_lattice(unitcell=hoomd.lattice.sc(a=2.0, type_name='A'), n=10)
HOOMD-blue is using domain decomposition: n_x = 1 n_y = 1 n_z = 2.
1 x 1 x 2 local grid on 1 nodes
notice(2): Group "all" created containing 1000 particles
lj.py:008 | nl = md.nlist.cell()
lj.py:009 | lj = md.pair.lj(r_cut=3.0, nlist=nl)
lj.py:010 | lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)
lj.py:012 | all = hoomd.group.all();
lj.py:013 | md.integrate.mode_standard(dt=0.005)
lj.py:014 | hoomd.md.integrate.langevin(group=all, kT=1.2, seed=4)
notice(2): integrate.langevin/bd is using specified gamma values
lj.py:016 | hoomd.run(10e3)
notice(2): -- Neighborlist exclusion statistics -- :
notice(2): Particles with 0 exclusions : 1000
notice(2): Neighbors included by diameter : no
notice(2): Neighbors excluded when in the same body: no
** starting run **
Time 00:00:02 | Step 10000 / 10000 | TPS 4667.82 | ETA 00:00:00
Average TPS: 4667.69
---------
-- Neighborlist stats:
1046 normal updates / 34 forced updates / 0 dangerous updates
n_neigh_min: 7 / n_neigh_max: 40 / n_neigh_avg: 20.9752
shortest rebuild period: 7
-- Cell list stats:
Dimension: 5, 5, 3
n_min : 2 / n_max: 20 / n_avg: 11.0267
** run complete **
Thanks again.
Best regards,
Wusheng