I have Ollama and Open WebUI on different computers, so I thought: why not on ZimaOS?
But there is no documentation on how to install a model and have the Open WebUI app see it. The Ollama I installed is the GPU app.
One person said (from SSH):
run sudo docker ps
copy the ID for ollama
run sudo docker exec
But that last command is incomplete (docker exec needs a container and a command to run in it), so it just errors:
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
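For completeness, here is what that command was probably meant to be. This is a sketch, assuming the Ollama container is named ollama (check the NAMES column of docker ps; the container ID works too) and using llama3 as an example model:

```
sudo docker ps                                  # find the Ollama container name or ID
sudo docker exec -it ollama ollama pull llama3  # pull a model inside the container
sudo docker exec -it ollama ollama list         # confirm the model is now available
```

Open WebUI lists whatever models its connected Ollama instance reports, so after the pull the model should appear in its model picker.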
If you want Open WebUI to work with a separate Ollama, then this is helpful:
On ZimaOS, for now, you need to remove the current Ollama or Open WebUI first, then install the Nvidia GPU app, and last install Open WebUI so it can leverage the GPU:
1. Remove any already-installed Open WebUI.
2. Install the GPU app.
3. Install Open WebUI.
PS.
Having to remove the pre-installed Open WebUI app first (to use the GPU) may be fixed in the future.
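If you try this order, one way to check whether the reinstalled Ollama actually got the GPU is to look inside its container. A sketch, assuming the container is named ollama (check sudo docker ps for the real name) and that the NVIDIA runtime makes nvidia-smi available in GPU-enabled containers:

```
sudo docker exec -it ollama nvidia-smi        # should list the Nvidia card if passthrough works
sudo docker logs ollama 2>&1 | grep -i gpu    # Ollama logs which GPUs it detected at startup
```

If nvidia-smi fails inside the container, it was started without GPU access and the install order above is worth redoing.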
This does not work: you end up with apps that do nothing, and in Open WebUI you cannot add Ollama models.
So here is what I did instead; I hope it helps someone else.
I installed Linux Mint, then made sure it recognized the Nvidia GPU.
All of the following was done through the Linux terminal:
ChatGPT helped me get Docker to use the GPU (see the sketch below).
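For anyone on the same step: the usual way to let Docker containers use an Nvidia card is the NVIDIA Container Toolkit. A minimal sketch, assuming the toolkit package is already installed per NVIDIA's instructions for your distro:

```
# register the NVIDIA runtime with Docker, then restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# quick test: this should print the GPU table from inside a throwaway container
sudo docker run --rm --gpus all ubuntu nvidia-smi
```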
Then I installed CasaOS, because ZimaOS is a standalone OS and still wasn't working for me. I hope someday Zima will let us install it on top of an existing system the way CasaOS does.
After that I installed Ollama and then Open WebUI from the terminal.
Now, in Open WebUI, I can download a model.
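If anyone wants to skip the CasaOS store and run the two containers by hand, the official images can be wired together roughly like this. A sketch based on the Ollama and Open WebUI images; the ports, volume names, and <host-ip> placeholder are assumptions to adapt:

```
# Ollama with GPU access, API exposed on port 11434
sudo docker run -d --gpus all --name ollama \
  -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Open WebUI pointed at that Ollama instance (replace <host-ip> with the server's LAN IP)
sudo docker run -d --name open-webui \
  -e OLLAMA_BASE_URL=http://<host-ip>:11434 \
  -v open-webui:/app/backend/data -p 3000:8080 \
  ghcr.io/open-webui/open-webui:main
```

Once Open WebUI is up, the Ollama URL shows under its connection settings, and any model pulled through Ollama appears in the model list.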
@mytechweb_41856 - I have ‘Ollama-Nvidia’ and ‘Open WebUI’ apps on ZimaOS installed on an old Lenovo server. Ollama is using my 3060 GPU and Open WebUI sees the Ollama instance on the server. I can access the server from any browser on my LAN via Open WebUI port 3050 and select a model from Ollama in the Open WebUI interface. It all works very well for me for text inputs. I have not set up speech inputs as I’d need to set up certificates within my browser. Therefore, I have a separate machine with Home Assistant running on it that accesses the Ollama on the server for speech chat LLM queries.
If this is the sort of setup that you are after, let me know and I will share the settings I have, in the hope that you may find them useful.
Regards, Denzil.