Can't install Stable Diffusion on ZimaOS with an NVIDIA RTX 5060 Ti. The app installs, but on startup it always shows this error:
Cannot launch {name}
Please right-click on the dashboard and try switching it again
close
What can I do?
Can you see the GPU widget on the ZimaOS dashboard now?
If you don't see it, you'll need to install the dedicated driver package for NVIDIA 50xx-series graphics cards first, then install Stable Diffusion.
Yes, my NVIDIA card is detected and shown correctly in ZimaOS, but the app won't start. I tried uninstalling and reinstalling; it's always the same.
This usually happens when the GPU shows up in ZimaOS but Docker can't access the NVIDIA runtime, so Stable Diffusion can't start. Try refreshing the driver extension:
sudo -i
cd /var/lib/extensions/
rm -f nvidia-open-kernel-*.raw
wget https://github.com/jerrykuku/staff/releases/download/v0.1.3/nvidia-open-kernel-580.105.08-linux-6.12.25.raw -O nvidia-open-kernel-580.105.08-linux-6.12.25.raw
systemd-sysext refresh
reboot
After reboot, confirm GPU + Docker GPU access:
nvidia-smi
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
If the Docker test fails, SD won’t launch — post the SD container logs and we can pinpoint it.
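For context on why the `docker run --gpus all` test can fail even when `nvidia-smi` works: the Docker daemon only hands GPUs to containers if an NVIDIA container runtime is registered with it. A rough stdlib-only illustration of that registration check (the `/etc/docker/daemon.json` path and layout are assumptions from standard NVIDIA Container Toolkit setups; ZimaOS may configure the runtime elsewhere):

```python
import json

def has_nvidia_runtime(daemon_json_text: str) -> bool:
    """Return True if an 'nvidia' runtime is registered in a Docker daemon config."""
    try:
        config = json.loads(daemon_json_text)
    except json.JSONDecodeError:
        return False
    return "nvidia" in config.get("runtimes", {})

if __name__ == "__main__":
    # Hypothetical location; adjust for your distro.
    try:
        with open("/etc/docker/daemon.json") as f:
            print("nvidia runtime registered:", has_nvidia_runtime(f.read()))
    except FileNotFoundError:
        print("no /etc/docker/daemon.json found")
```

If that runtime entry is missing, `--gpus all` has nothing to delegate to, which matches the "app installed but won't start" symptom.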
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Launching launch.py…
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Using TCMalloc: libtcmalloc_minimal.so.4
icewhale-stable-diffusion-webui | Python 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0]
icewhale-stable-diffusion-webui | Version: v1.7.0
icewhale-stable-diffusion-webui | Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
icewhale-stable-diffusion-webui | Launching Web UI with arguments: -f --xformers --listen --allow-code --api --enable-insecure-extension-access
icewhale-stable-diffusion-webui | /stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:173: UserWarning:
icewhale-stable-diffusion-webui | NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
icewhale-stable-diffusion-webui | The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
icewhale-stable-diffusion-webui | If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
icewhale-stable-diffusion-webui | Loading weights [cc6cb27103] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
icewhale-stable-diffusion-webui | Running on local URL: http://0.0.0.0:7860
icewhale-stable-diffusion-webui | To create a public link, set `share=True` in `launch()`.
icewhale-stable-diffusion-webui | Startup time: 11.5s (prepare environment: 1.9s, import torch: 3.1s, import gradio: 1.1s, setup paths: 1.6s, initialize shared: 0.4s, other imports: 0.8s, load scripts: 0.6s, create ui: 1.5s, gradio launch: 0.1s, add APIs: 0.3s).
icewhale-stable-diffusion-webui | Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
icewhale-stable-diffusion-webui | loading stable diffusion model: RuntimeError
icewhale-stable-diffusion-webui | Traceback (most recent call last):
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
icewhale-stable-diffusion-webui |     self._bootstrap_inner()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
icewhale-stable-diffusion-webui |     self.run()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 953, in run
icewhale-stable-diffusion-webui |     self._target(*self._args, **self._kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
icewhale-stable-diffusion-webui |     shared.sd_model  # noqa: B018
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
icewhale-stable-diffusion-webui |     return modules.sd_models.model_data.get_sd_model()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
icewhale-stable-diffusion-webui |     load_model()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 658, in load_model
icewhale-stable-diffusion-webui |     load_model_weights(sd_model, checkpoint_info, state_dict, timer)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 399, in load_model_weights
icewhale-stable-diffusion-webui |     model.half()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
icewhale-stable-diffusion-webui |     return super().half()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1001, in half
icewhale-stable-diffusion-webui |     return self._apply(lambda t: t.half() if t.is_floating_point() else t)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   [Previous line repeated 1 more time]
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
icewhale-stable-diffusion-webui |     param_applied = fn(param)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
icewhale-stable-diffusion-webui |     return self._apply(lambda t: t.half() if t.is_floating_point() else t)
icewhale-stable-diffusion-webui | RuntimeError: CUDA error: no kernel image is available for execution on the device
icewhale-stable-diffusion-webui | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
icewhale-stable-diffusion-webui | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
icewhale-stable-diffusion-webui | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | Stable diffusion model failed to load
icewhale-stable-diffusion-webui | Applying attention optimization: Doggettx… done.
icewhale-stable-diffusion-webui | Exception in thread Thread-2 (load_model):
icewhale-stable-diffusion-webui | Traceback (most recent call last):
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
icewhale-stable-diffusion-webui |     self.run()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 953, in run
icewhale-stable-diffusion-webui |     self._target(*self._args, **self._kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/initialize.py", line 153, in load_model
icewhale-stable-diffusion-webui |     devices.first_time_calculation()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/devices.py", line 166, in first_time_calculation
icewhale-stable-diffusion-webui |     conv2d(x)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
icewhale-stable-diffusion-webui |     return forward_call(*args, **kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 501, in network_Conv2d_forward
icewhale-stable-diffusion-webui |     return originals.Conv2d_forward(self, input)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
icewhale-stable-diffusion-webui |     return self._conv_forward(input, self.weight, self.bias)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
icewhale-stable-diffusion-webui |     return F.conv2d(input, weight, bias, self.stride,
icewhale-stable-diffusion-webui | RuntimeError: CUDA error: no kernel image is available for execution on the device
icewhale-stable-diffusion-webui | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
icewhale-stable-diffusion-webui | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
icewhale-stable-diffusion-webui | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | Mounted embeddings
icewhale-stable-diffusion-webui | Mounted .cache
icewhale-stable-diffusion-webui | Mounted styles.csv
icewhale-stable-diffusion-webui | Mounted ui-config.json
icewhale-stable-diffusion-webui | Mounted models
icewhale-stable-diffusion-webui | Mounted .cache
icewhale-stable-diffusion-webui | Mounted config_states
icewhale-stable-diffusion-webui | Mounted config.json
icewhale-stable-diffusion-webui | Mounted extensions
icewhale-stable-diffusion-webui | Mounted outputs
icewhale-stable-diffusion-webui | CPU_FALLBACK is enabled
icewhale-stable-diffusion-webui | GPU is available, running in GPU mode
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Install script for stable-diffusion + Web UI
icewhale-stable-diffusion-webui | Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Running on root user
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Repo already cloned, using it as install directory
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Create and activate python venv
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Launching launch.py…
icewhale-stable-diffusion-webui | ################################################################
icewhale-stable-diffusion-webui | Using TCMalloc: libtcmalloc_minimal.so.4
icewhale-stable-diffusion-webui | Python 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0]
icewhale-stable-diffusion-webui | Version: v1.7.0
icewhale-stable-diffusion-webui | Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
icewhale-stable-diffusion-webui | Launching Web UI with arguments: -f --xformers --listen --allow-code --api --enable-insecure-extension-access
icewhale-stable-diffusion-webui | /stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py:173: UserWarning:
icewhale-stable-diffusion-webui | NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
icewhale-stable-diffusion-webui | The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
icewhale-stable-diffusion-webui | If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
icewhale-stable-diffusion-webui | Loading weights [cc6cb27103] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
icewhale-stable-diffusion-webui | Running on local URL: http://0.0.0.0:7860
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | To create a public link, set `share=True` in `launch()`.
icewhale-stable-diffusion-webui | Startup time: 10.2s (prepare environment: 2.5s, import torch: 2.4s, import gradio: 0.9s, setup paths: 1.5s, initialize shared: 0.3s, other imports: 0.7s, load scripts: 0.5s, create ui: 0.9s, gradio launch: 0.1s, add APIs: 0.2s).
icewhale-stable-diffusion-webui | Creating model from config: /stable-diffusion-webui/configs/v1-inference.yaml
icewhale-stable-diffusion-webui | loading stable diffusion model: RuntimeError
icewhale-stable-diffusion-webui | Traceback (most recent call last):
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
icewhale-stable-diffusion-webui |     self._bootstrap_inner()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
icewhale-stable-diffusion-webui |     self.run()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 953, in run
icewhale-stable-diffusion-webui |     self._target(*self._args, **self._kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/initialize.py", line 147, in load_model
icewhale-stable-diffusion-webui |     shared.sd_model  # noqa: B018
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/shared_items.py", line 128, in sd_model
icewhale-stable-diffusion-webui |     return modules.sd_models.model_data.get_sd_model()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 531, in get_sd_model
icewhale-stable-diffusion-webui |     load_model()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 658, in load_model
icewhale-stable-diffusion-webui |     load_model_weights(sd_model, checkpoint_info, state_dict, timer)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/sd_models.py", line 399, in load_model_weights
icewhale-stable-diffusion-webui |     model.half()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 98, in half
icewhale-stable-diffusion-webui |     return super().half()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1001, in half
icewhale-stable-diffusion-webui |     return self._apply(lambda t: t.half() if t.is_floating_point() else t)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
icewhale-stable-diffusion-webui |     module._apply(fn)
icewhale-stable-diffusion-webui |   [Previous line repeated 1 more time]
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
icewhale-stable-diffusion-webui |     param_applied = fn(param)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1001, in <lambda>
icewhale-stable-diffusion-webui |     return self._apply(lambda t: t.half() if t.is_floating_point() else t)
icewhale-stable-diffusion-webui | RuntimeError: CUDA error: no kernel image is available for execution on the device
icewhale-stable-diffusion-webui | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
icewhale-stable-diffusion-webui | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
icewhale-stable-diffusion-webui | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui |
icewhale-stable-diffusion-webui | Stable diffusion model failed to load
icewhale-stable-diffusion-webui | Applying attention optimization: Doggettx… done.
icewhale-stable-diffusion-webui | Exception in thread Thread-2 (load_model):
icewhale-stable-diffusion-webui | Traceback (most recent call last):
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
icewhale-stable-diffusion-webui |     self.run()
icewhale-stable-diffusion-webui |   File "/usr/lib/python3.10/threading.py", line 953, in run
icewhale-stable-diffusion-webui |     self._target(*self._args, **self._kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/initialize.py", line 153, in load_model
icewhale-stable-diffusion-webui |     devices.first_time_calculation()
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/modules/devices.py", line 166, in first_time_calculation
icewhale-stable-diffusion-webui |     conv2d(x)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
icewhale-stable-diffusion-webui |     return forward_call(*args, **kwargs)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 501, in network_Conv2d_forward
icewhale-stable-diffusion-webui |     return originals.Conv2d_forward(self, input)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
icewhale-stable-diffusion-webui |     return self._conv_forward(input, self.weight, self.bias)
icewhale-stable-diffusion-webui |   File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
icewhale-stable-diffusion-webui |     return F.conv2d(input, weight, bias, self.stride,
icewhale-stable-diffusion-webui | RuntimeError: CUDA error: no kernel image is available for execution on the device
icewhale-stable-diffusion-webui | CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
icewhale-stable-diffusion-webui | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
icewhale-stable-diffusion-webui | Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
icewhale-stable-diffusion-webui |
Hi mate, this log shows the real issue:
Your RTX 5060 Ti reports CUDA capability sm_120, but the PyTorch build inside the Stable Diffusion container only ships kernels up to sm_90, so it can't run anything on your GPU:
sm_120 is not compatible with the current PyTorch installation
RuntimeError: CUDA error: no kernel image is available for execution on the device
So this isn’t really a ZimaOS driver issue anymore — the built-in Stable Diffusion app image needs a newer PyTorch/CUDA build that includes sm_120 support.
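To see why both errors in the log are the same problem: PyTorch binaries ship precompiled GPU kernels only for the compute capabilities in their arch list, and a GPU whose "sm" tag isn't in that list gets "no kernel image is available". A simplified sketch of that compatibility test (this is an illustration, not PyTorch's actual internal logic; the arch list below is copied from the warning in your log):

```python
def sm_code(capability: tuple[int, int]) -> str:
    """Format a CUDA compute capability like (12, 0) as PyTorch's 'sm_120' tag."""
    major, minor = capability
    return f"sm_{major}{minor}"

def has_kernel_image(capability: tuple[int, int], arch_list: list[str]) -> bool:
    """A GPU can run the binary only if kernels were compiled for its sm tag."""
    return sm_code(capability) in arch_list

# Arch list the bundled PyTorch build reports in the log above:
OLD_BUILD = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86", "sm_90"]

print(has_kernel_image((12, 0), OLD_BUILD))  # RTX 5060 Ti (sm_120) -> False
print(has_kernel_image((8, 6), OLD_BUILD))   # e.g. an RTX 30-series card -> True
```

That's why reinstalling the app or swapping drivers can't fix it: the container needs a PyTorch wheel built with sm_120 (Blackwell) support, typically a CUDA 12.8+ build.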
In ZimaOS, use Custom Install (or Compose Toolbox) and paste this compose file:
services:
  stable-diffusion-webui:
    image: ghcr.io/ai-dock/stable-diffusion-webui:latest
    container_name: stable-diffusion-webui
    ports:
      - "7860:7860"
    volumes:
      - /DATA/AppData/stable-diffusion:/workspace
    environment:
      - WEBUI_PORT=7860
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
Once it's up, open http://YOUR_ZIMA_IP:7860 in your browser.
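First startup can take a while (the image downloads models), so instead of refreshing the browser you can poll the port until the UI answers. A small stdlib-only sketch (the IP below is a placeholder for your Zima IP; the injectable `fetch` parameter exists only so the helper can be exercised without a live server):

```python
import time
import urllib.error
import urllib.request

def wait_for_webui(url: str, timeout_s: float = 300.0, fetch=None) -> bool:
    """Poll the WebUI URL until it returns HTTP 200 or the timeout elapses."""
    if fetch is None:
        def fetch(u):
            with urllib.request.urlopen(u, timeout=5) as resp:
                return resp.status
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if fetch(url) == 200:
                return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting; retry
        time.sleep(2)
    return False

# Example (placeholder address):
# wait_for_webui("http://192.168.1.10:7860")
```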