Conversation
btw, when testing the build I had to update …
Thanks for the work. Do you think it would be better to compile with mpicc instead of the plain C compiler? We would just need to install Open MPI or MPICH2 in the Docker file.
@licode I'm not sure if I understand your question correctly. You still need a C compiler in the toolchain to back mpicc.
I see. I vote for switching between Open MPI and MPICH. Which conda-build version do you use?
I think it's …

In … I'm guessing we need …

Yes, I was right -- the bug fix to …
Thanks for your work, @leofang! Looks good to me now.
# https://github.com/conda-forge/openmpi-feedstock/blob/1d6390794529ad80f5b3416fa2cb98882d1c95fa/recipe/meta.yaml

{% set version = "3.1.3" %}
{% set sha256 = "8be04307c00f51401d3fb9d837321781ea7c79f2a5a4a2e5d4eaedc874087ab6" %}
It's good that the new version fixes the known bug.
@@ -23,6 +23,7 @@ requirements:
    - {{ compiler('c') }}
    - {{ compiler('cxx') }}
    - {{ compiler('fortran') }}
    - conda-build >=3.17.0
Good point! I tried to rebuild the Docker image for nsls2/debian-with-miniconda with the newer version of conda-build, but it failed with some strange error. Will retry it later today. Anyway, your addition will fix it for now.
No problem. Is the Docker image up to date now? I just tried building this commit with it, and it works fine.
No, it wasn't updated to have conda-build >=3.17.0; however, it's good practice to declare this dependency explicitly. I'll try to rebuild our image so that we have the updated version of conda-build out of the box.
I took the recipes from their conda-forge counterparts. They can be built with the Docker image nsls2/debian-with-miniconda. This will benefit the ptychography code and, interestingly, was requested by an HXN user earlier today for a supposedly different purpose.

A little background just for the record: MPI is a de facto standard for inter-process communication, tailored for scientific computation, and vendors are free to provide their own MPI implementations that conform to the MPI standard (v3.1 is the latest). Open MPI and MPICH are two of the most popular open-source implementations, and both are supported in conda-forge. Since the APIs are standardized, in principle one could swap the underlying MPI library for another without rebuilding the top-level applications (such as mpi4py here). In practice, however, not all vendors guarantee ABI compatibility, so we need to build mpi4py separately against each supported MPI implementation.
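To make the "standardized API" point concrete, below is a minimal sketch of an mpi4py script (the file name hello_mpi.py is just for illustration); it only touches the standard MPI API, so the same code runs under either Open MPI or MPICH, and only the mpi4py build it imports differs.

```python
# hello_mpi.py -- minimal, implementation-agnostic mpi4py example.
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator spanning all launched processes
rank = comm.Get_rank()     # this process's ID within the communicator
size = comm.Get_size()     # total number of processes

# Gather a small message from every rank onto rank 0 and print them there.
messages = comm.gather("hello from rank %d" % rank, root=0)
if rank == 0:
    print("%d processes reporting in:" % size)
    for msg in messages:
        print("  " + msg)
```

Run it with, e.g., mpiexec -n 4 python hello_mpi.py; the launcher comes from whichever MPI package is installed.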
The reason for supporting both MPI libraries is that one sometimes performs better than the other, or does not crash in certain situations where the other does, so having more than one implementation makes verification and testing easier.
As for why we need the metapackage mpi, my answer shifts to "I do not know". I originally thought mpi4py depends on mpi, which in turn depends on the vendor packages, but I was wrong. Now I'm just following the current practice, which was summarized at the end of a two-year-long discussion: conda-forge/staged-recipes#1501 (comment).

UPDATE: the metapackage mpi is there to ensure mutual exclusivity of the MPI implementations; that is, Open MPI and MPICH cannot coexist in one environment. See conda-forge/staged-recipes#1501 (comment).
I keep the build-time dependencies on the c, cxx, and fortran compilers for both MPIs (as they were in conda-forge) because, on second thought, I don't think it hurts. Most use cases in our environment would probably not use mpicc, mpic++, or mpifort, but simply link against the MPI shared library. If anything wrong with the runtime libraries is reported, though, this should be revisited.
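As a side note (a sketch, not part of the recipes): mpi4py can report at runtime which MPI library it is actually linked against, which is a handy sanity check when switching between the Open MPI and MPICH builds.

```python
# check_mpi.py -- report which MPI implementation mpi4py is linked against.
from mpi4py import MPI

# Vendor name and version tuple as detected by mpi4py, e.g. ('Open MPI', (3, 1, 3))
name, version = MPI.get_vendor()
print("MPI vendor:", name, version)

# Version string reported by the underlying MPI library itself (MPI-3 call)
print(MPI.Get_library_version().strip())
```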
One thing we could play with is building a version of h5py on top of mpi4py; see conda-forge/h5py-feedstock. But I do not know whether this would benefit anything we're doing. (Long ago I tested loading images from ~~Databroker~~ GPFS with the mpio driver, but no speedup was observed --- possibly one process is already enough to saturate the IO bandwidth?)
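For the record, parallel HDF5 through h5py looks roughly like the sketch below, assuming an h5py build with MPI support (the file and dataset names are made up); whether it would actually speed up our IO is exactly the open question above.

```python
# parallel_write.py -- sketch of parallel HDF5 IO via h5py's "mpio" driver.
# Assumes h5py was built against parallel HDF5 and mpi4py.
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# All ranks open the same file collectively through the MPI-IO driver.
with h5py.File("demo.h5", "w", driver="mpio", comm=comm) as f:
    # Dataset creation is collective: every rank calls it with the same arguments.
    dset = f.create_dataset("data", shape=(size, 100), dtype="f8")
    # Each rank then writes its own row independently.
    dset[rank, :] = np.full(100, rank, dtype="f8")
```

Launched the same way as any MPI program, e.g. mpiexec -n 4 python parallel_write.py.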