With 6 HDD drives, why isn’t there an option in the ZimaOS WebUI to create Raid-6, only Raid-0/1/5?
It is no problem creating it via CLI (create a Raid-5 with 5 disks in the UI, then use mdadm to add the last drive and grow the array to level 6 instead of level 5). Most people will use large drives; I have 16TB drives. Raid-5 is just not cutting it: the chance of bit errors while rebuilding a failed drive is (relatively) too big… Raid-5 “died” in 2009: Why RAID 5 stops working in 2009 | ZDNET
(The UI still says Raid-5, and the 6th drive has a status of “used”):
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdf[6] sde[5] sdd[3] sdc[2] sdb[1] sda[0]
62502989824 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 2/117 pages [8KB], 65536KB chunk
unused devices: <none>
root@ZimaCube:/# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Sep 26 00:09:05 2024
Raid Level : raid6
Array Size : 62502989824 (58.21 TiB 64.00 TB)
Used Dev Size : 15625747456 (14.55 TiB 16.00 TB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Sep 28 00:28:26 2024
State : clean
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : zimaos:HDD-RAID
UUID : b176acff:5f61eb31:4381392e:a2cd7277
Events : 86695
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
5 8 64 4 active sync /dev/sde
6 8 80 5 active sync /dev/sdf
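For reference, the CLI conversion described above can be sketched roughly as follows. The device names (`/dev/md0` for the existing 5-disk array, `/dev/sdf` for the 6th drive) are assumptions taken from the output above; adapt them to your system, and be aware the reshape takes a long time on 16TB drives:

```shell
# Sketch only -- run as root; device names are assumptions.
# 1. Add the 6th drive as a spare to the existing 5-disk Raid-5 array
mdadm --add /dev/md0 /dev/sdf

# 2. Reshape from Raid-5 to Raid-6 across all 6 devices
#    (the backup file lets mdadm resume the reshape if interrupted)
mdadm --grow /dev/md0 --level=6 --raid-devices=6 \
      --backup-file=/root/md0-grow.backup

# 3. Watch the reshape progress
cat /proc/mdstat
```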
Do we need to send in a feature request or something?
Remember, this is not a commercial product. Its main purpose (based on ZimaOS) is a home NAS, with an assumed primary function of serving up general media.
Double parity means longer write times and loss of storage capacity. Raid 5 is a much more suitable format for a home NAS with this assumed primary function. Nobody needs double parity for 35 seasons of The Simpsons, for example. Raid 5 is a perfect balance of protection and storage for its assumed primary function.
Does the Pro maybe advertise that it can do a bit more than that? Sure. But it’s still a consumer product. Perhaps Raid 6 comes along later in the UI, but I think 99% of users are content with the current stock Raid options.
I hear what you are saying, but it is so easy to just add the feature. If you can do Raid-5 in the UI, it’s equally simple to do Raid-6. It could be under an advanced option for all I care. No need to hide what is available out of the box (= mdraid). And some people don’t want to lose 35 seasons of The Simpsons if a bit less probability of that can be had when one drive crashes; because one drive will crash, you just don’t know when.
It’s also a Pro version (I have the Pro Creative one), so why limit me? If it was purely for home users, why add NVMe? IceWhale caters more to prosumers, if you ask me, with their tinker boards (ZimaBoard/ZimaBlade), so I don’t buy your reasoning.
If I want to “lose” 2 drives out of 6 (16TB drives) to parity, which I gladly will for peace of mind, plus a separate backup of important stuff (not The Simpsons), that’s my choice. Just put up some info telling users what happens with Raid-6.
I’m hoping for Raid-6 support in the future, hopefully soon… For now I need to decide between using a manually configured Raid-6 or recreating the array as Raid-5 on 5 disks and waiting for Raid-6 support in the UI.
In the meantime, I’ll play with 4x4TB NVMe drives in slot 7. There I am happier to use Raid-5, for instance; SSDs/NVMe have shown to be more reliable, and thus there are fewer problems with bit errors when rebuilding.
Observation:
If I expand the Raid-5 made of 5 of my 6 HDDs to a Raid-6 with the last disk, one disk is removed/left out of the Raid-6 after a reboot. That’s not nice of ZimaOS. I can re-add and rebuild, but that takes 1-2 days, and it’s gone again after the next reboot.
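One thing worth checking (a guess, not a confirmed fix: ZimaOS may regenerate its array configuration at boot and reassemble from a stale 5-device Raid-5 definition) is whether the recorded array definition still matches the grown array. On a stock mdadm setup you would refresh and re-add roughly like this; the mdadm.conf path and `/dev/sdX` placeholder are assumptions, and ZimaOS may store its metadata elsewhere and overwrite this file:

```shell
# Sketch only -- run as root; ZimaOS's config location may differ.
# Inspect the current (raid6, 6-device) definition
mdadm --detail --scan

# Persist it so assembly at boot matches the grown array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Re-add the disk that was dropped after reboot and let it rebuild
mdadm --re-add /dev/md0 /dev/sdX   # /dev/sdX = the dropped disk (placeholder)
```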