It's important to be aware of your HPC system's specific configuration. While this script may work, it doesn't necessarily match what your HPC requires: many systems, for example, require loading specific modules or setting certain options, such as which network interfaces to use. If your HPC doesn't provide documentation, ask an administrator for support.
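For example, on many clusters the usual pattern is to load the site-provided environment modules instead of sourcing installation paths directly. A minimal sketch (the module names below are only placeholders; check module avail on your system):

module load gromacs/2023.2
module load openmpi/4.1
module load cuda/12.4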
**************************************************************************************************************************************
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH --ntasks-per-node=32 # number of tasks per-node
### #SBATCH --gres=gpu:A100-SXM4:1 # not required by gmx_MMPBSA
#SBATCH --time=7-00:00:00 # changed from 10min to 7 days
#SBATCH --partition=testp
#SBATCH --error=error_test.%J.err
#SBATCH --output=output_test.%J.out
echo "Starting at `date`"
echo "Running on hosts: $SLURM_NODELIST"
echo "Running on $SLURM_NNODES nodes."
echo "Running $SLURM_NTASKS tasks."
echo "Job id is $SLURM_JOBID"
echo "Job submission directory is : $SLURM_SUBMIT_DIR"
cd $SLURM_SUBMIT_DIR
#echo "DEBUG: Current directory is $(pwd)"
#ls -lh
# export UCX_NET_DEVICES=mlx5_0:1 # optional: selects the network device UCX uses for communication
# If you have problems with OpenMPI over InfiniBand, comment these out and fall back to Ethernet (TCP)
### export UCX_TLS=rc,cuda_copy,cuda_ipc,self,sm
# export OMPI_MCA_pml=ucx
# export OMPI_MCA_btl=^openib
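# ---- Example Ethernet/TCP fallback (only if the InfiniBand/UCX settings give problems) ----
# The interface name (eth0) is just an example; check the actual name with "ip addr" on a compute node
# export OMPI_MCA_pml=ob1
# export OMPI_MCA_btl=tcp,self
# export OMPI_MCA_btl_tcp_if_include=eth0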
# export GMX_ENABLE_DIRECT_GPU_COMM=1
# export GMX_CUDA_GRAPH=1
# source /opt/hpcx-v2.17.1-gcc-mlnx_ofed-ubuntu22.04-cuda12-x86_64/hpcx-init.sh
# hpcx_load
# source /opt/cuda-12.4/env.sh
# source /nlsasfs/home/groupiiiv/sarthakt/Softwares/plumed-2.9.2-installation/sourceme.sh
source /nlsasfs/home/groupiiiv/sarthakt/Softwares/gromacs-2023.2-installation/bin/GMXRC
source /nlsasfs/home/groupiiiv/sarthakt/softwares/gmx_MMPBSA/miniconda3/etc/profile.d/conda.sh
conda activate gmxMMPBSA
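# Optional sanity check: confirm the executables resolve from the activated environment
# which gmx_MMPBSA
# which mpirun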
#############################################################################################
# ---- this is not required because you activated the gmxMMPBSA environment ---- #
#GMX_DIR=/nlsasfs/home/groupiiiv/sarthakt/Softwares/gromacs-2023.2-installation/bin/
#MPI_DIR=/opt/hpcx-v2.17.1-gcc-mlnx_ofed-ubuntu22.04-cuda12-x86_64/ompi/bin
# GMXMMPBSA_DIR=/nlsasfs/home/groupiiiv/sarthakt/softwares/gmx_MMPBSA/miniconda3/envs/gmxMMPBSA/bin
# $SLURM_NTASKS is the number of global tasks (nodes * ntasks-per-node), so if you configure 2 nodes, it will automatically change to 64
mpirun -np $SLURM_NTASKS gmx_MMPBSA -O -i mmpbsa.in \
    -cs md_0_10.tpr -ct md_0_10_center.xtc -ci index.ndx -cg 1 13 -cp topol.top \
    -o FINAL_RESULTS_MMPBSA.dat -eo FINAL_RESULTS_MMPBSA.csv \
    -do FINAL_DECOMP_MMPBSA.dat -deo FINAL_DECOMP_MMPBSA.csv -nogui
***********************************************************************************************************************************
Please verify that it works and let me know if you encounter any errors.
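For reference, assuming you save the script as, say, mmpbsa_job.sh (the filename is just an example), you can submit and monitor it with:

sbatch mmpbsa_job.sh
squeue -u $USER
tail -f output_test.<jobid>.out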