Problem importing bmtk.analyzer.compartment #327
Comments
Hi @moravveji, BMTK itself does not directly call `srun` or `mpirun`. It uses the standard mpi4py library, which relies on your locally installed version of OpenMPI. We've run large BMTK simulations under both Moab/Torque and Slurm, although how to actually execute them will differ for each cluster.

One thing to try is to create a Python script and run it directly from the prompt using `mpirun` (or `mpiexec`):

```
$ mpirun -np 16 python my_bmtk_script.py
```

Unfortunately, whatever you do will no longer be interactive, and I don't think you can start up a shell using `mpirun` (or at least I've never seen it done before). If you're using Moab I think you can use the …

Another option to try is using/compiling a different version of OpenMPI. If you have access to Anaconda, it might be worth creating a test environment and installing OpenMPI/MPICH2. I believe that when it installs it will try to find the appropriate workload-manager options on the system, and if there is a Slurm manager on your HPC, it will install with PMI support. Although in my experience it doesn't always work, especially if Slurm is installed in a non-standard way.
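For example, a minimal test script along those lines might look like this (a sketch only: the file name `my_bmtk_script.py` comes from the command above, and the body is illustrative rather than part of BMTK):

```python
# my_bmtk_script.py -- illustrative sketch, not part of BMTK.
from mpi4py import MPI

# The import below is where the reported failure occurs, so doing it here
# under mpirun checks whether mpi4py/OpenMPI initialise correctly.
import bmtk.analyzer.compartment  # noqa: F401

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()}: "
      "bmtk.analyzer.compartment imported OK")
```

Launched with `mpirun -np 16 python my_bmtk_script.py`, each rank should print one line; if the import still fails there, the problem is in the MPI stack rather than in BMTK.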
Thanks @kaeldai for your comments.
So, the take-home message is to avoid using `srun`.
I have pip-installed BMTK version 1.0.8 on our HPC cluster, which runs the Rocky 8 OS on Intel Icelake CPUs. When I start an interactive job with 16 tasks, I fail to import the `bmtk.analyzer.compartment` package:

I have built `BMTK/1.0.8-foss-2022b` (and all its dependencies) against the `OpenMPI/4.1.4-GCC-12.2.0` module. However, this specific OpenMPI module is not built with Slurm support. That's why parallel applications launched using `srun` spit out the OPAL error message above.

I would like to ask if there is an environment variable to choose how the tasks are launched, so that I can use `mpirun` directly instead of `srun`.
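Since BMTK itself doesn't call `srun` or `mpirun` (it goes through mpi4py, as noted above), the launcher is effectively decided by how the script is invoked rather than by a BMTK setting. One quick way to confirm which launcher actually started the processes is to print the launcher-specific environment variables each rank sees. The following is a hedged sketch: `launch_check.py` is a hypothetical file name, and the variable names are common OpenMPI/Slurm conventions, not anything BMTK-specific.

```python
# launch_check.py -- hypothetical diagnostic, not part of BMTK.
import os

from mpi4py import MPI  # fails at import time if OpenMPI cannot initialise

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# The first line of the library version string identifies the OpenMPI build.
print(f"rank {rank}/{comm.Get_size()}: "
      f"{MPI.Get_library_version().splitlines()[0]}")

# OpenMPI's mpirun typically sets OMPI_* variables, while srun exports SLURM_*
# (and PMI_*/PMIX_* when PMI support is compiled in).
for var in ("OMPI_COMM_WORLD_SIZE", "SLURM_NTASKS", "PMI_SIZE"):
    print(f"rank {rank}: {var}={os.environ.get(var)}")
```

Run under `mpirun` inside the interactive allocation (instead of `srun`), this should show the `OMPI_*` variables populated and should avoid the OPAL error, since `mpirun` uses OpenMPI's own launcher rather than Slurm's PMI.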