🐛 Describe the bug
Reporting dmlc/xgboost#11500 to the torch team as well.
I'm on Apple silicon in a fresh environment that I just created:
uv venv --python 3.12
source .venv/bin/activate
uv pip install xgboost torch
Running the following segfaults:
import xgboost
import torch
M = torch.tensor([[2.0, 0.5],
                  [0.5, 1.0]],
                 dtype=torch.double, device="cpu")
L = torch.linalg.cholesky(M)
print("Cholesky succeeded:", L)
Switching the import order (torch before xgboost) avoids the crash, as does running with export OMP_NUM_THREADS=1.
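A minimal sketch of the second workaround: the variable has to be in the environment before either library initializes its OpenMP runtime, i.e. before the first import, not exported afterwards. (The imports are left commented here because they require the affected packages to be installed.)

```python
# Workaround sketch: pin OpenMP to a single thread *before* any
# OpenMP-backed library loads. Setting this after import has no effect
# on an already-initialized runtime.
import os

os.environ["OMP_NUM_THREADS"] = "1"

# import xgboost  # with the variable set, the crash no longer
# import torch    # reproduces regardless of import order (per this report)
print(os.environ["OMP_NUM_THREADS"])
```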
From a quick debugging session, I believe xgboost loads libomp from Homebrew first, and torch then loads its own bundled copy, so two OpenMP runtimes end up in the same process. I don't know whether torch or xgboost should do something differently here; I'm just opening the issue to raise awareness.
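To check the two-runtimes hypothesis, something like the following sketch can list every libomp image mapped into the process (using lsof, which ships with macOS; the Homebrew path in the comment is an example, not a confirmed location):

```python
# Diagnostic sketch: list libomp shared libraries mapped into this process.
# If both a Homebrew copy (e.g. /opt/homebrew/opt/libomp/...) and torch's
# bundled copy appear, two OpenMP runtimes are loaded at once.
import os
import subprocess


def find_libomp_mappings(lsof_output: str) -> list[str]:
    """Return the lsof lines that mention a libomp shared library."""
    return [line for line in lsof_output.splitlines() if "libomp" in line.lower()]


if __name__ == "__main__":
    try:
        out = subprocess.run(["lsof", "-p", str(os.getpid())],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # lsof not available on this system
    for line in find_libomp_mappings(out):
        print(line)
```

Run after `import xgboost; import torch` to see which copies are actually resident.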
Versions
PyTorch version: 2.7.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 4.0.2
Libc version: N/A
Python version: 3.12.10 (main, Apr 8 2025, 11:35:47) [Clang 17.0.0 (clang-1700.0.13.3)] (64-bit runtime)
Python platform: macOS-15.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M4 Max
Versions of relevant libraries:
[pip3] Could not collect
[conda] No relevant packages