Since I upgraded to 1.5.3 yesterday, after the reboot my RAID NVMe storage is no longer getting mounted.
I was able to mount it manually and then added an entry to /etc/fstab, but after a reboot the fstab entry is ignored and I still need to run mount -a from a shell.
Downgrading to 1.5.2 did not fix it.
This is what I see in the journal:
Dec 09 09:17:45 ZimaOS kernel: md: Autodetecting RAID arrays.
Dec 09 09:17:46 ZimaOS systemd[1]: Mounting /media/RAID-Storage-2…
Dec 09 09:17:47 ZimaOS systemd[1]: Mounted /media/RAID-Storage-2.
Dec 09 09:17:48 ZimaOS zimaos-local-storage[3702]: /usr/bin/umount --force --verbose --quiet /media/RAID-Storage-2
Dec 09 09:17:48 ZimaOS systemd[1]: var-lib-casaos_data-.media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS systemd[1]: DATA-.media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS systemd[1]: media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS zimaos-local-storage[3702]: /usr/bin/umount -l --force --verbose --quiet /media/RAID-Storage-2
What you’re seeing isn’t a RAID failure: the array is detected, mounted, and then immediately unmounted by zimaos-local-storage. That service enforces how ZimaOS manages storage, and whenever it decides a mount point was manually modified or doesn’t match its expected internal mapping, it unmounts it on boot.
In other words, the issue isn’t RAID or Btrfs. It’s ZimaOS’ storage manager stepping in and undoing your manual mount.
A few points that may help:
Manual /etc/fstab entries are ignored by design.
ZimaOS regenerates mount logic on each boot, so anything added manually will be overwritten or undone.
Your journal logs confirm the behaviour:
The system mounts /media/RAID-Storage-2, then zimaos-local-storage forcibly unmounts it because it detects a mismatch with its internal database entry.
Downgrading doesn’t resolve it because the storage manager keeps the same behaviour across 1.5.x.
The RAID metadata itself looks perfectly healthy (as seen in your DB output).
What this means:
The underlying array is fine.
ZimaOS isn’t “failing” to mount it; it’s choosing to unmount it because the configuration doesn’t match its expected state.
This is internal logic that currently can’t be overridden safely from the user side.
Now that you have confirmed the ZimaOS storage manager is the issue, would you know how to go about correcting it? Also, what during the upgrade caused this?
I didn’t want to add an entry into /etc/fstab but that is the only way I managed to get my server back up and running.
I don’t think you’ve done anything wrong here – from your logs it really does look like ZimaOS’ storage manager is the one mounting the array and then immediately unmounting it again, so I believe this is more of an internal bug/regression than a RAID/Btrfs problem.
To your questions:
would you know how to go about correcting it?
From the user side there isn’t a clean way to “fix” the storage manager logic itself. ZimaOS keeps its own database of disks, RAID sets and mount points, and that’s what zimaos-local-storage is enforcing. If that internal mapping doesn’t match what’s on disk after the upgrade, the service simply tears the mount down – which is exactly what your journal shows.
Because that logic lives inside the storage daemon, I think the only proper fix has to come from IceWhale (either a patch or a manual adjustment to your metadata on their side). What I would suggest:
Generate a system report in ZimaOS and attach it to your thread / email.
Email community@icewhale.org and reference this topic, pointing out the exact lines where the array is mounted and then unmounted by zimaos-local-storage.
Ask them to confirm whether there was a change in RAID/volume validation around 1.5.3 and whether your array can be “re-imported” into the storage manager cleanly.
what during the upgrade caused this?
From the outside we can’t see the internal code, but I suspect the upgrade tightened how RAID metadata and mount points are validated (UUIDs, serials, DB entry, “is_default” flag, etc.). If anything about your existing array doesn’t match the new expectations 100%, the service seems to mark it as not valid for auto-mount and then unmounts it – even though the array itself is perfectly healthy.
For now, if your /etc/fstab entry is working and you’re comfortable with it, I’d treat that as a temporary workaround just to keep the server usable, but I wouldn’t consider it the final fix. Once the storage team adjusts the logic or fixes the bug, you should be able to remove the manual fstab entry again and let ZimaOS handle it natively.
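For anyone else who needs the same stopgap, the fstab entry would look something like the fragment below. The UUID is a placeholder: substitute whatever blkid reports for your array device. The nofail option is optional but stops a missing array from blocking the boot:

```
# /etc/fstab : placeholder UUID, run "blkid" on the array device and
# substitute the real value before using this line.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/RAID-Storage-2  btrfs  defaults,nofail  0  0
```

Again, treat this as temporary; remove it once ZimaOS mounts the array natively.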
So in short: your RAID is fine; I believe this is a ZimaOS storage-manager issue that needs dev attention rather than more tweaking on your side.
We have traced the cause: the fs_type value in the database is capitalized, which causes the mount to fail. This is expected to be fixed in the next release.
For now, you can run the following command to work around the mount failure; you will also need to manually revert your /etc/fstab change.
sudo sqlite3 /var/lib/casaos/db/local-storage.db "UPDATE raids SET fs_type = 'btrfs' WHERE fs_type = 'BTRFS';"
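To see what that statement actually does before touching the live file, here is a minimal sketch run against a throwaway database instead of /var/lib/casaos/db/local-storage.db. The one-column raids table below is a stand-in for illustration; the real table has more columns:

```shell
# Work on a throwaway database, never the live one.
rm -f demo.db
sqlite3 demo.db "CREATE TABLE raids (fs_type TEXT); INSERT INTO raids VALUES ('BTRFS');"

# Same UPDATE as the posted fix, pointed at the demo DB:
sqlite3 demo.db "UPDATE raids SET fs_type = 'btrfs' WHERE fs_type = 'BTRFS';"

# The value is now the lowercase name that mount expects:
sqlite3 demo.db "SELECT fs_type FROM raids;"   # prints "btrfs"
```

After running the real command on the live database, a reboot (or mount -a) should confirm the array mounts automatically again.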