RAID Storage not mounted after upgrade to 1.5.3

Since I upgraded to 1.5.3 yesterday, after the reboot my RAID NVMe storage is no longer getting mounted.

I was able to mount it manually and then added an entry to fstab, but after a reboot the fstab file is ignored and I still need to run mount -a from a shell.

Downgrading to 1.5.2 did not fix it.

This is what I see in the journal:

Dec 09 09:17:45 ZimaOS kernel: md: Autodetecting RAID arrays.
Dec 09 09:17:46 ZimaOS systemd[1]: Mounting /media/RAID-Storage-2…
Dec 09 09:17:47 ZimaOS systemd[1]: Mounted /media/RAID-Storage-2.
Dec 09 09:17:48 ZimaOS zimaos-local-storage[3702]: /usr/bin/umount --force --verbose --quiet /media/RAID-Storage-2
Dec 09 09:17:48 ZimaOS systemd[1]: var-lib-casaos_data-.media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS systemd[1]: DATA-.media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS systemd[1]: media-RAID\x2dStorage\x2d2.mount: Deactivated successfully.
Dec 09 09:17:48 ZimaOS zimaos-local-storage[3702]: /usr/bin/umount -l --force --verbose --quiet /media/RAID-Storage-2

Why is zimaos-local-storage unmounting it?

I am able to see it in the DB:

id           1
name         RAID-Storage-2
uuid         4c00f19d:94811fbf:4bba3759:7d6c3857
uuids
path         /dev/md0
mount_point  /media/RAID-Storage-2
level        1
shortage     0
devices      /dev/nvme1n1 /dev/nvme2n1
serials
is_default   0
disk_number  2
reshape      0
dev_size     2000263643136
status       ok
members      [{"i":87,"m":"Lexar SSD NM610PRO 2TB","s":"NFV403R000621P1112","z":2000398934016,"r":false,"t":"nvme"},
             {"i":88,"m":"Lexar SSD NM610PRO 2TB","s":"NFV403R000217P1112","z":2000398934016,"r":false,"t":"nvme"}]
fs_type      BTRFS

Any help is greatly appreciated.

What you’re seeing isn’t a RAID failure: the array is detected, mounted, and then immediately unmounted by zimaos-local-storage. That component is responsible for enforcing how ZimaOS manages storage, and whenever it thinks a mount point was manually modified or doesn’t match its expected internal mapping, it unmounts it on boot.

In other words, the issue isn’t RAID or Btrfs. It’s ZimaOS’ storage manager stepping in and undoing your manual mount.

A few points that may help:

  1. Manual /etc/fstab entries are ignored by design.
    ZimaOS regenerates mount logic on each boot, so anything added manually will be overwritten or undone.
  2. Your journal logs confirm the behaviour:
    The system mounts /media/RAID-Storage-2, and then zimaos-local-storage forcibly unmounts it because it detects a mismatch with the internal database entry.
  3. Downgrading doesn’t resolve it because the storage manager keeps the same behaviour across 1.5.x.
  4. The RAID metadata itself looks perfectly healthy (as seen in your DB output).

What this means:

  • The underlying array is fine.
  • ZimaOS isn’t “failing” to mount it; it’s choosing to unmount it because the configuration doesn’t match its expected state.
  • This is internal logic that currently can’t be overridden safely from the user side.
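If you want to see what the storage manager believes about the array without touching anything, the internal database (at /var/lib/casaos/db/local-storage.db, with a `raids` table, as confirmed later in this thread) can be queried read-only. A minimal sketch, using a scratch in-memory copy so it is safe to run anywhere – the column names beyond `fs_type` are my assumption, modelled on your DB output; on a real box you would connect to the actual file instead:

```python
import sqlite3

# Scratch stand-in for /var/lib/casaos/db/local-storage.db. Only the columns
# relevant to this thread are modelled; the real schema may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raids (name TEXT, path TEXT, mount_point TEXT, fs_type TEXT)")
db.execute(
    "INSERT INTO raids VALUES (?, ?, ?, ?)",
    ("RAID-Storage-2", "/dev/md0", "/media/RAID-Storage-2", "BTRFS"),
)

# Inspect what the storage manager would see for this array.
row = db.execute(
    "SELECT name, mount_point, fs_type FROM raids WHERE name = 'RAID-Storage-2'"
).fetchone()
print(row)  # ('RAID-Storage-2', '/media/RAID-Storage-2', 'BTRFS')
```

Comparing fields like these against the on-disk reality is roughly what the daemon does before deciding whether to keep a mount.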

I suggest reporting this specific behaviour to:
community@icewhale.org


Hi, thank you for taking the time to reply!

Now that you have confirmed the ZimaOS storage manager is the issue, would you know how to go about correcting it? Also, what during the upgrade caused this?

I didn’t want to add an entry into /etc/fstab but that is the only way I managed to get my server back up and running.

Thanks,
Nick

Hi Nick,

You’re very welcome.

I don’t think you’ve done anything wrong here – from your logs it really does look like ZimaOS’ storage manager is the one mounting the array and then immediately unmounting it again, so I believe this is more of an internal bug/regression than a RAID/Btrfs problem.

To your questions:

would you know how to go about correcting it?

From the user side there isn’t a clean way to “fix” the storage manager logic itself. ZimaOS keeps its own database of disks, RAID sets and mount points, and that’s what zimaos-local-storage is enforcing. If that internal mapping doesn’t match what’s on disk after the upgrade, the service simply tears the mount down – which is exactly what your journal shows.

Because that logic lives inside the storage daemon, I think the only proper fix has to come from IceWhale (either a patch or a manual adjustment to your metadata on their side). What I would suggest:

  1. Generate a system report in ZimaOS and attach it to your thread / email.
  2. Email community@icewhale.org and reference this topic, pointing out the exact lines where the array is mounted and then unmounted by zimaos-local-storage.
  3. Ask them to confirm whether there was a change in RAID/volume validation around 1.5.3 and whether your array can be “re-imported” into the storage manager cleanly.
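To pull out just the relevant mount/unmount lines for that email, a small filter sketch – the sample lines below are taken from the journal excerpt earlier in this thread; on the live system you would feed it the output of `journalctl -b` instead:

```python
# Filter journal output down to the lines worth attaching to the report.
# Sample input copied from the journal excerpt in this thread.
journal = """\
Dec 09 09:17:45 ZimaOS kernel: md: Autodetecting RAID arrays.
Dec 09 09:17:46 ZimaOS systemd[1]: Mounting /media/RAID-Storage-2...
Dec 09 09:17:47 ZimaOS systemd[1]: Mounted /media/RAID-Storage-2.
Dec 09 09:17:48 ZimaOS zimaos-local-storage[3702]: /usr/bin/umount --force --verbose --quiet /media/RAID-Storage-2
"""

relevant = [
    line for line in journal.splitlines()
    if "RAID-Storage-2" in line or "zimaos-local-storage" in line
]
print(len(relevant))  # 3
```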

what during the upgrade caused this?

From the outside we can’t see the internal code, but I suspect the upgrade tightened how RAID metadata and mount points are validated (UUIDs, serials, DB entry, “is_default” flag, etc.). If anything about your existing array doesn’t match the new expectations 100%, the service seems to mark it as not valid for auto-mount and then unmounts it – even though the array itself is perfectly healthy.

For now, if your /etc/fstab entry is working and you’re comfortable with it, I’d treat that as a temporary workaround just to keep the server usable, but I wouldn’t consider it the final fix. Once the storage team adjusts the logic or fixes the bug, you should be able to remove the manual fstab entry again and let ZimaOS handle it natively.
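For completeness, the kind of interim entry you described would look something like this (the device path /dev/md0 and mount point come from your DB output; the nofail option is my own addition, so a failed mount doesn’t block booting):

```
# /etc/fstab – temporary workaround only; remove once ZimaOS mounts natively
/dev/md0  /media/RAID-Storage-2  btrfs  defaults,nofail  0  0
```

Again, treat this purely as a stopgap until the storage manager handles the mount itself.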

So in short: your RAID is fine; I believe this is a ZimaOS storage-manager issue that needs dev attention rather than more tweaking on your side.



Thank you again George for taking the time to provide such a detailed response to my questions; it is greatly appreciated.

I took your suggestion and sent an email to community@icewhale.org - hopefully they will provide a permanent solution/fix.

Thanks,
Nick


Can you go to the path navigation bar in Files, enter this path (/ZimaOS-HD/.log/casaos/local-storage.log), and send us the local-storage.log file?

Hi, I have attached the log as requested.

Thanks

local-storage.log (562.1 KB)

We have determined that the cause is the value of fs_type in the database being capitalized, which makes the mount fail; this is expected to be fixed in the next release.
For now, you can run the following command to resolve the mount failure. You will also need to revert your manual fstab entry.

sudo sqlite3 /var/lib/casaos/db/local-storage.db "UPDATE raids SET fs_type = 'btrfs' WHERE fs_type = 'BTRFS';"
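If anyone wants to see what this UPDATE does before touching /var/lib/casaos/db/local-storage.db, here is the same statement exercised on a scratch in-memory database (column names beyond `fs_type` are assumed for the demo):

```python
import sqlite3

# Scratch copy to demonstrate the fix; on the real system the same UPDATE
# runs against /var/lib/casaos/db/local-storage.db via the sqlite3 CLI.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raids (name TEXT, fs_type TEXT)")
db.execute("INSERT INTO raids VALUES ('RAID-Storage-2', 'BTRFS')")

# The fix: lowercase the filesystem type so the mount logic recognises it.
db.execute("UPDATE raids SET fs_type = 'btrfs' WHERE fs_type = 'BTRFS'")

fs_type = db.execute(
    "SELECT fs_type FROM raids WHERE name = 'RAID-Storage-2'"
).fetchone()[0]
print(fs_type)  # btrfs
```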

Apologies for the delay. This indeed fixed the issue.

Thank you!
