How to use Ollama models with Open WebUI

I have Ollama and Open WebUI running on different computers, so I thought: why not on ZimaOS?
But there is no documentation on how to install a model and have the Open WebUI app see it. The Ollama app is the one that uses the GPU.
One person said (from SSH):

run sudo docker ps
copy the ID for the Ollama container
run sudo docker exec

That last command is incomplete as given, so it only prints:

"docker exec" requires at least 2 arguments.
See 'docker exec --help'.

Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Execute a command in a running container
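
From the usage line, the full command needs the container ID plus the command to run inside it, so presumably it would be something like this, though nothing in the docs confirms it (container ID from docker ps, <modelname> being whatever model you want, and assuming the ollama CLI exists inside the container):

sudo docker exec -it <container-id> ollama pull <modelname>
sudo docker exec -it <container-id> ollama list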

No one on the Discord channel has been able to help or knows how to do this. And why is there no documentation?

SSH also reports that a lot of things are not installed, so what is the purpose of the apps if they don't install?

On Windows or Linux with Docker you can install both and, in Open WebUI, run a model by name (ollama run modelname), but not with the ZimaOS apps.
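
On plain Docker the usual pairing is roughly this (the names, ports and volume paths are just the defaults from the Ollama and Open WebUI docs; <host-ip> is a placeholder):

sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
sudo docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<host-ip>:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main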

I really hope someone can help. Maybe I need another NAS server; ZimaOS is not ready yet.

I shared the error above.

https://www.youtube.com/watch?v=88h0XVBNVlk Can this help you?

No. When I try to download a model in Open WebUI like he did, nothing happens.
I also noticed that his ZimaOS sees his GPU; mine doesn't.

If you want Open WebUI to work with a separate Ollama, this should help:

On ZimaOS, for now, you need to remove the currently installed Ollama or Open WebUI first, then install the NVIDIA GPU driver, and finally install Open WebUI so it can leverage the GPU.

1. Remove any previously installed Open WebUI
2. Install the NVIDIA GPU driver
3. Install Open WebUI

PS: The need to remove the pre-installed Open WebUI app first (to use the GPU) may be fixed in the future.
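
Once the GPU driver and Open WebUI are reinstalled in that order, you can confirm from SSH that the GPU and Ollama are actually reachable; this assumes the container ID reported by docker ps and that the Ollama app exposes its default port 11434:

sudo docker exec -it <ollama-container-id> nvidia-smi
curl http://localhost:11434/api/version

If nvidia-smi lists the GPU inside the container and the API responds, Open WebUI should be able to pull and run models.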

This does not work. You have apps that do nothing; in Open WebUI you cannot add Ollama models.

I hope this helps someone else.
I installed Linux Mint, then made sure it recognized the NVIDIA GPU.
All through the Linux terminal:
ChatGPT helped me get Docker to use the GPU.
Then I installed CasaOS, because ZimaOS is standalone and still not working; I hope someday ZimaOS will let us install apps the way CasaOS does.
Anyway, after that I installed Ollama and then Open WebUI from the terminal.
Now in Open WebUI I can download a model.
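
If anyone wants to verify that Docker can actually see the GPU before installing Ollama, a quick check is something like this (the CUDA image tag is only an example; any CUDA base image should do):

nvidia-smi
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

If both print the GPU table, Ollama running in Docker should be able to use it as well.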

In fact, you can pull models directly in Open WebUI, which includes Ollama itself. This is shown in our video:

@mytechweb_41856 - I have ‘Ollama-Nvidia’ and ‘Open WebUI’ apps on ZimaOS installed on an old Lenovo server. Ollama is using my 3060 GPU and Open WebUI sees the Ollama instance on the server. I can access the server from any browser on my LAN via Open WebUI port 3050 and select a model from Ollama in the Open WebUI interface. It all works very well for me for text inputs. I have not set up speech inputs as I’d need to set up certificates within my browser. Therefore, I have a separate machine with Home Assistant running on it that accesses the Ollama on the server for speech chat LLM queries.
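
(For anyone checking connectivity: the Ollama API can also be queried from any machine on the LAN, e.g. curl http://<server-ip>:11434/api/tags should list the installed models, assuming the default Ollama port is exposed.)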
If this is the sort of setup you are after, let me know and I will share the settings I have, in the hope that you may find them useful.
Regards, Denzil.
