So last year I opened the topic below, and now I've lost the whole array again and I don't know why. I replaced just one disk and poof! Why doesn't it see the other 3? Why doesn't it just ask to rebuild the array with the new disk? Thanks for your help!
That’s definitely not how RAID5 should behave. Replacing one disk shouldn’t wipe the whole array like that.
From what you’re describing (and the screenshot), it doesn’t look like the data is gone; it looks like ZimaOS just isn’t recognising the array properly anymore. Especially given the changes to RAID detection in 1.6.0, this feels more like a metadata/reassembly issue than an actual failure.
Main thing right now: don’t click anything like “initialize” or “enable”, and don’t create a new array. That’s how data actually gets wiped.
Let’s first see what the system actually sees underneath the UI.
Can you run these in the Web Terminal and paste the full output:
lsblk -f
blkid
cat /proc/mdstat
That’ll tell us:
if all 4 drives are still detected
if the RAID info is still on the disks
whether the array is just inactive vs actually broken
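To give you a rough idea of what to look for (this is illustrative sample content, not output from your machine), an array that shows up as "inactive" in /proc/mdstat is usually recoverable; no md device listed at all is the worrying case:

```shell
# Illustrative only: sample /proc/mdstat content standing in for real output.
cat <<'EOF' > /tmp/mdstat_sample.txt
Personalities : [raid5]
md0 : inactive sda[0] sdb[1] sdd[3]
unused devices: <none>
EOF

# "inactive" = the kernel sees member disks but refused to start the array.
if grep -q 'inactive' /tmp/mdstat_sample.txt; then
    echo "array present but not started"
fi

# On the real box, the per-disk metadata check (read-only, needs root) would be:
#   mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
```

`mdadm --examine` doesn't touch anything; it just dumps each disk's RAID superblock, which is exactly what we need to see here.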
Also worth saying: your setup (LSI 9217-8i in IT mode) is solid, so this is very unlikely to be hardware. This is almost certainly ZimaOS just not reassembling the array correctly.
You’ve got 3 drives that still belong to your old RAID (linux_raid_member), but the new/replaced drive (sdc) is showing as DDF / Dell RAID, not Linux RAID.
So basically the array isn’t “gone”; it’s just mismatched. ZimaOS can’t reassemble it because one disk is speaking a completely different RAID format.
That’s why nothing shows in /proc/mdstat and the UI says everything is missing.
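To make the mismatch concrete, here's a sketch using assumed blkid output matching what you pasted (the device names follow your setup; the UUIDs are placeholders):

```shell
# Assumed blkid output for illustration -- UUIDs are made-up placeholders.
cat <<'EOF' > /tmp/blkid_sample.txt
/dev/sda: UUID="aaaa" TYPE="linux_raid_member"
/dev/sdb: UUID="bbbb" TYPE="linux_raid_member"
/dev/sdc: UUID="cccc" TYPE="ddf_raid_member"
/dev/sdd: UUID="dddd" TYPE="linux_raid_member"
EOF

# md only assembles disks whose TYPE is linux_raid_member:
grep -c 'linux_raid_member' /tmp/blkid_sample.txt   # three usable members
grep 'ddf_raid_member' /tmp/blkid_sample.txt        # sdc: foreign DDF metadata
```

With only 3 of 4 members speaking Linux-RAID, a 4-disk RAID5 has nothing it can auto-start cleanly, which is why the UI reports everything as missing.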
Since you’ve already got a backup, this isn’t worth trying to repair; the clean move is to wipe that disk and rebuild the array properly.
For the future: whenever you replace a drive, always wipe it first; otherwise old RAID metadata (like this Dell/DDF stuff) can confuse the whole array.
So yeah, not hardware, not your controller, just mixed metadata.