[kubernetes] Pre-install NVIDIA GPU Driver #863
Conversation
Walkthrough
The Dockerfile for the Ubuntu container disk image in the Kubernetes application package was updated to install the NVIDIA driver package (nvidia-dkms-535).
Actionable comments posted: 1
📒 Files selected for processing (1)
packages/apps/kubernetes/images/ubuntu-container-disk/Dockerfile (1 hunks)
🔇 Additional comments (1)
packages/apps/kubernetes/images/ubuntu-container-disk/Dockerfile (1)
40-40: No changes required for the comment line. The # install nvidia driver comment is clear and consistent with the surrounding Dockerfile.
@@ -37,6 +37,8 @@ RUN qemu-img resize image.img 5G \
   && guestfish --remote command "sed -i '/SystemdCgroup/ s/=.*/= true/' /etc/containerd/config.toml" \
   # install kubernetes
   && guestfish --remote command "apt-get install -y kubelet kubeadm" \
+  # install nvidia driver
+  && guestfish --remote command "apt-get install -y install nvidia-dkms-535" \
Fix the apt-get install command typo and enhance GPU support.
The command currently reads:
apt-get install -y install nvidia-dkms-535
The duplicated install keyword will break the invocation: apt-get will try to install a package literally named install and fail. Remove the extra word. Additionally, consider installing the nvidia-driver-535 meta-package (which pulls in the necessary userland components) and nvidia-container-toolkit to enable GPU support in containers.
Minimal fix:
- && guestfish --remote command "apt-get install -y install nvidia-dkms-535" \
+ && guestfish --remote command "apt-get install -y nvidia-dkms-535" \
For full GPU driver + runtime support, you might instead do:
- && guestfish --remote command "apt-get install -y install nvidia-dkms-535" \
+ && guestfish --remote command "apt-get install -y nvidia-driver-535 nvidia-dkms-535 nvidia-container-toolkit" \
📝 Committable suggestion
- && guestfish --remote command "apt-get install -y install nvidia-dkms-535" \
+ && guestfish --remote command "apt-get install -y nvidia-dkms-535" \
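For context, a minimal sketch of how the corrected lines could sit in the existing RUN chain. The linux-headers-generic line is an assumption on my part, not part of the suggestion: DKMS can only compile the kernel module when headers matching the guest kernel are present, and nvidia-container-toolkit ships from NVIDIA's apt repository rather than the stock Ubuntu archive.

  && guestfish --remote command "apt-get install -y kubelet kubeadm" \
  # install nvidia driver (assumes headers for the guest kernel are available;
  # DKMS builds the module against them at install time)
  && guestfish --remote command "apt-get install -y linux-headers-generic" \
  && guestfish --remote command "apt-get install -y nvidia-driver-535 nvidia-dkms-535 nvidia-container-toolkit" \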
There are still some issues with driver load. It does not work:

# k logs -n cozy-gpu-operator nvidia-container-toolkit-daemonset-4h9xn -f driver-validation
time="2025-04-24T07:21:03Z" level=info msg="version: b5479aaa-amd64, commit: b5479aa"
time="2025-04-24T07:21:03Z" level=info msg="Attempting to validate a pre-installed driver on the host"
time="2025-04-24T07:21:03Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:03Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:08Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:08Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:13Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:13Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:18Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:18Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:23Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:23Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:28Z" level=info msg="Attempting to validate a driver container installation"
time="2025-04-24T07:21:28Z" level=warning msg="failed to validate the driver, retrying after 5 seconds\n"
time="2025-04-24T07:21:33Z" level=info msg="Attempting to validate a driver container installation" Same as with # k logs -n cozy-gpu-operator nvidia-driver-daemonset-qnck2 -f k8s-driver-manager
Getting current value of the 'nvidia.com/gpu.deploy.operator-validator' node label
Current value of 'nvidia.com/gpu.deploy.operator-validator=true'
Getting current value of the 'nvidia.com/gpu.deploy.container-toolkit' node label
Current value of 'nvidia.com/gpu.deploy.container-toolkit=true'
Getting current value of the 'nvidia.com/gpu.deploy.device-plugin' node label
Current value of 'nvidia.com/gpu.deploy.device-plugin=true'
Getting current value of the 'nvidia.com/gpu.deploy.gpu-feature-discovery' node label
Current value of 'nvidia.com/gpu.deploy.gpu-feature-discovery=true'
Getting current value of the 'nvidia.com/gpu.deploy.dcgm-exporter' node label
Current value of 'nvidia.com/gpu.deploy.dcgm-exporter=true'
Getting current value of the 'nvidia.com/gpu.deploy.dcgm' node label
Current value of 'nvidia.com/gpu.deploy.dcgm=true'
Getting current value of the 'nvidia.com/gpu.deploy.mig-manager' node label
Current value of 'nvidia.com/gpu.deploy.mig-manager='
Getting current value of the 'nvidia.com/gpu.deploy.nvsm' node label
Current value of 'nvidia.com/gpu.deploy.nvsm='
Getting current value of the 'nvidia.com/gpu.deploy.sandbox-validator' node label
Current value of 'nvidia.com/gpu.deploy.sandbox-validator='
Getting current value of the 'nvidia.com/gpu.deploy.sandbox-device-plugin' node label
Current value of 'nvidia.com/gpu.deploy.sandbox-device-plugin='
Getting current value of the 'nvidia.com/gpu.deploy.vgpu-device-manager' node label
Current value of 'nvidia.com/gpu.deploy.vgpu-device-manager='
Current value of AUTO_UPGRADE_POLICY_ENABLED=true'
Shutting down all GPU clients on the current node by disabling their component-specific nodeSelector labels
node/kubernetes-abcdef-md1-lsctb-7xqzz labeled
Waiting for the operator-validator to shutdown
Waiting for the container-toolkit to shutdown
pod/nvidia-container-toolkit-daemonset-g64fn condition met
Waiting for the device-plugin to shutdown
Waiting for gpu-feature-discovery to shutdown
Waiting for dcgm-exporter to shutdown
Waiting for dcgm to shutdown
Auto eviction of GPU pods on node kubernetes-abcdef-md1-lsctb-7xqzz is disabled by the upgrade policy
Unloading NVIDIA driver kernel modules...
nvidia_uvm 1785856 0
nvidia_drm 110592 0
nvidia_modeset 1699840 1 nvidia_drm
nvidia 11513856 2 nvidia_uvm,nvidia_modeset
drm_kms_helper 311296 1 nvidia_drm
drm 622592 4 drm_kms_helper,nvidia,nvidia_drm
Could not unload NVIDIA driver kernel modules, driver is in use
Auto drain of the node kubernetes-abcdef-md1-lsctb-7xqzz is disabled by the upgrade policy
Failed to uninstall nvidia driver components
Auto eviction of GPU pods on node kubernetes-abcdef-md1-lsctb-7xqzz is disabled by the upgrade policy
Auto drain of the node kubernetes-abcdef-md1-lsctb-7xqzz is disabled by the upgrade policy
Rescheduling all GPU clients on the current node by enabling their component-specific nodeSelector labels
node/kubernetes-abcdef-md1-lsctb-7xqzz labeled

Also, it significantly increases pipeline time and decreases user flexibility, so I would keep it as-is for now.
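For reference, when a node image ships a pre-installed driver, the GPU Operator is usually told not to manage the driver itself. A sketch of the upstream Helm flags (the release name, repo alias, and namespace here are assumptions, not the Cozystack configuration):

# Sketch: deploy the GPU Operator against a pre-installed host driver.
# driver.enabled=false keeps the operator from rolling out its own driver
# daemonset, which is the component that tries (and fails) to unload the
# in-use kernel modules in the logs above.
helm install gpu-operator nvidia/gpu-operator \
  --namespace cozy-gpu-operator \
  --set driver.enabled=false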
Signed-off-by: Andrei Kvapil kvapss@gmail.com