I have Ollama and Open WebUI on different computers, so I thought: why not on ZimaOS?
But there is no documentation on how to install a model and have the Open WebUI app see it. The Ollama app is the GPU one.
One person said (from SSH):
run sudo docker ps
copy the ID for the Ollama container
run sudo docker exec
But that command is incomplete, and it just returns:
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
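For anyone hitting the same error: docker exec needs the container ID plus the command to run inside it. A minimal sketch of the missing piece, assuming the container runs the standard ollama image and using llama3 as a stand-in model name (replace <container-id> with the ID from docker ps):

sudo docker ps                                             # note the ID of the ollama container
sudo docker exec -it <container-id> ollama pull llama3     # download a model inside the container
sudo docker exec -it <container-id> ollama list            # confirm the model is now available

If that works, the model should then show up in Open WebUI's model picker, provided Open WebUI is pointed at that Ollama instance.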
If you want Open WebUI to work with a separate Ollama, then this is helpful:
On ZimaOS, for now, you need to remove the current Ollama or Open WebUI first, then install the Nvidia GPU app, and finally install Open WebUI so that it can leverage the GPU.
1. Remove any previously installed Open WebUI
2. Install the GPU app
3. Install Open WebUI
PS:
The need to remove the pre-installed Open WebUI app first (to use the GPU) may be fixed in the future.
This does not work; you end up with apps that do nothing, and in Open WebUI you cannot add Ollama models.
I hope this helps someone else
I installed Linux Mint, then made sure it recognized the Nvidia GPU.
All through the Linux terminal:
ChatGPT helped me get Docker to use the GPU (roughly the steps sketched below).
Then I installed CasaOS, because ZimaOS is standalone and still not working for this. I hope someday Zima will let us install apps the way CasaOS does.
Anyway, after that I installed Ollama and then Open WebUI from the terminal.
Now, in Open WebUI, I can download a model.
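For anyone retracing this route, here is a rough sketch of the terminal side, assuming Linux Mint with the Nvidia driver already working. Package and image names follow the official Nvidia and Ollama docs; <host-ip> is a placeholder, and ports are the defaults:

# Let Docker containers use the GPU (after adding Nvidia's apt repository per their docs):
sudo apt install nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Run Ollama with GPU access (official ollama/ollama image, default port 11434):
sudo docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run Open WebUI and point it at that Ollama instance:
sudo docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://<host-ip>:11434 \
  -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

With that in place, models pulled into the ollama container appear in Open WebUI's model list.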
@mytechweb_41856 - I have ‘Ollama-Nvidia’ and ‘Open WebUI’ apps on ZimaOS installed on an old Lenovo server. Ollama is using my 3060 GPU and Open WebUI sees the Ollama instance on the server. I can access the server from any browser on my LAN via Open WebUI port 3050 and select a model from Ollama in the Open WebUI interface. It all works very well for me for text inputs. I have not set up speech inputs as I’d need to set up certificates within my browser. Therefore, I have a separate machine with Home Assistant running on it that accesses the Ollama on the server for speech chat LLM queries.
If this is the sort of setup that you are after, let me know and I will share the settings I have, in the hope that you may find them useful.
Regards, Denzil.
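For anyone wiring up a similar split by hand before Denzil shares his settings (this is just a hedged sketch, not his actual configuration): the setting that usually matters is the Ollama API URL under Open WebUI's Admin Settings > Connections, pointed at the server's Ollama port. A quick way to confirm Ollama is reachable from the LAN first:

curl http://<server-ip>:11434/api/tags    # should return the installed models as JSON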
Thanks Animatco. Whilst not ‘cheap’, it was relatively inexpensive for the amount of opportunity that having the ZimaOS environment brings. The Lenovo P520 was about £250 used and a new RTX 3060 was about £250 in the UK. ZimaOS installed flawlessly and has been running nearly 24/7 for the last 9 months. With the app library on ZimaOS pretty extensive and growing, there are ample opportunities to experiment with all sorts of home-labbing and self-hosting. I’ve moved my iCloud photos to Immich on my ZimaOS box and have it all backed up on a separate drive. My music collection is in Jellyfin, again all self-hosted. My notes are in Joplin. The built-in Files app is great and super fast for dragging, dropping, and sharing large amounts of data between machines on my home network. Plus the Ollama AI stuff is super useful for everyday information: fixing my car, providing me with tech support, helping the kids with their homework, etc. Not paying Apple £30/month or OpenAI £20/month means that the entire cost of the hardware and system is paid for within 10 months.
My only issue is that I’ve not been able to support the ZimaOS team by buying some hardware from them yet (I feel a bit bad about that!) but, when I can find a use case and have some cash free (that the kids haven’t got their hands on!), I’ll try and pick up one of their ZimaBlades to have a play with. Once you start this home-lab stuff, it’s pretty addictive, and a lot easier to work with than trying to run Docker in a Linux CLI!
Good luck with your home-lab journey