Solution to issue cannot be found in the documentation.
I checked the documentation.
Issue
Packages that optionally use the CUDA compilers and specify `c_stdlib_version: 2.17  # [linux]` in `conda_build_config.yaml` are still getting builds with `cuda_compiler=None, c_stdlib_version=2.12`. I'm guessing this relates to the complex zip_keys for CUDA and `c_stdlib_version` not taking the `conda_build_config.yaml` values at highest priority. I'm not 100% certain it's CUDA-related, but I've seen it twice (openmpi and lammps), both of which use the CUDA compilers and have no-CUDA variants that still want to use 2.17.
This can be worked around in this specific case by specifying `os_version: cos7` in `conda-forge.yml`, but presumably something is wrong in the rendering.
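For reference, the workaround looks roughly like this in `conda-forge.yml` (a sketch assuming a linux-64 feedstock; the platform key shown is illustrative):

```yaml
# conda-forge.yml (sketch): pin the CI image to cos7, which ships
# glibc/sysroot 2.17, side-stepping the zip_keys rendering problem
os_version:
  linux_64: cos7
```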
Example: conda-forge/lammps-feedstock#198, which migrates from pinning `sysroot_linux-64 2.17` to pinning `c_stdlib_version`; the new pin has no effect.
Likely less of an issue now that we are using GLIBC 2.17 by default.
That said, the issue is that all associated zip_keys need to be defined. These come from here; copying below for completeness:
```yaml
zip_keys:
  # For CUDA, c_stdlib_version/cdt_name is zipped below with the compilers.
  -                               # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
    - c_stdlib_version            # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
    - cdt_name                    # [linux and os.environ.get("CF_CUDA_ENABLED", "False") != "True"]
  -                               # [unix]
    - c_compiler_version          # [unix]
    - cxx_compiler_version        # [unix]
    - fortran_compiler_version    # [unix]
    - c_stdlib_version            # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cdt_name                    # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler               # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler_version       # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - docker_image                # [linux and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  -                               # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler               # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
    - cuda_compiler_version       # [win64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True"]
```
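So "defining all associated zip_keys" would mean something like the following in the feedstock's `conda_build_config.yaml`. This is only a hedged sketch for the no-CUDA case; the docker_image value is illustrative, and on a CUDA-enabled feedstock you would need one entry per CUDA variant so the zip lengths still match:

```yaml
# Sketch only: every key zipped with c_stdlib_version must be re-declared
# together, one entry per variant, so the zip lengths still line up.
c_stdlib_version:        # [linux]
  - 2.17                 # [linux]
cdt_name:                # [linux]
  - cos7                 # [linux]
cuda_compiler:           # [linux]
  - None                 # [linux]
cuda_compiler_version:   # [linux]
  - None                 # [linux]
docker_image:            # [linux]
  - quay.io/condaforge/linux-anvil-cos7-x86_64   # [linux]  (illustrative image)
```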
Agreed that just defining `os_version` is an effective workaround that bypasses this need.
Ideally we would have some way to replace the values for one key in zip_keys without needing to touch the rest. Unfortunately we lack the tooling at the moment to do so (and this can get a bit hairy in some cases).
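To illustrate why this is hairy (with made-up values, not the actual pinnings): zipped keys vary in lockstep, so after rendering, the variants pair up row-wise, and overriding one key's list without the others either breaks the pairing or silently keeps the old rows:

```yaml
# Illustrative rendered variants (values are made up):
# each list entry is one build; zipped keys move together.
- c_stdlib_version: "2.12"
  cdt_name: cos6
  cuda_compiler: None
  cuda_compiler_version: None
- c_stdlib_version: "2.17"
  cdt_name: cos7
  cuda_compiler: nvcc
  cuda_compiler_version: "11.2"
# Overriding only c_stdlib_version leaves the first row's
# cuda_compiler=None variant paired with its old companions,
# which is how cuda_compiler=None, c_stdlib_version=2.12 survives.
```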