Okay, we will continue to enhance RAID functionality and robustness in version 1.2.2. And no matter how we optimize storage and disk management, user data will not be compromised. We will keep everyone updated on our feature iterations through the community and here, to keep this information transparent.
Sure, let me bring in our OTA team leader to help.
Yes, the update domain is casaos.oss-cn-shanghai.aliyuncs.com.
Sorry, we can’t confirm the IP because we are using Aliyun’s S3 storage service, so we can only determine the domain name at the moment.
I’ve also discussed with @Jerry whether we should migrate the images to servers in the US or Europe. In the short term, we will keep listening to community needs. If there are more cases like yours, we will adjust the source of the image releases.
I wanted to post a personal update. For me to feel comfortable using the Cube, I needed to know that RAID5 works successfully: #1) for it to maintain data even if a drive dies and #2) to be able to add a new drive and everything returns to normal.
So with v1.2.3 installed, I did my test again from my original post.
I removed one of the drives with the Cube running. Look at that, my RAID5 stayed connected to my Mac and was still functional. #1 is a success!
I then erased the drive, reformatted it, and reinstalled it into the Cube.
This is where I’m running into trouble…
After 2-3 attempts of adding the drive to the RAID5, it says “RAID5 Checking” in the Storage window. This is where it should be re-populating the new drive with the RAID data so that the data is spread across all 6 drives again.
But after a few hours, this process fails and I get the red alert that the “RAID is damaged” and to “add a new drive.” The drive shows up, but so far it has not been re-integrated into the RAID5 after 2-3 tries.
I’m now on my fourth try I think. I’ve reformatted the drive on my Mac twice now.
@blacksunix Do you remember which file system you reformatted the drive with when you used a Mac and got your RAID options to work again? I remember the first time I did this test, my Mac formatting did not work for the RAID options. I had to use a PC and create a volume. But this time I only had access to a Mac.
@Zima Team:
When a drive is inserted into a ZimaCube RAID slot to be added into the RAID5, does ZimaOS automatically reformat the drive to its own needs? Or do I need to format it a certain way before the RAID can use it? It says it will “delete all data” on that drive, but I don’t think it’s done that.
When I took the drive out after the two failed RAID5 re-builds and connected it to my Mac, I’m pretty sure it was still named “Untitled”, which is the same name I had given it. It was also formatted the same as I had formatted it. I don’t think the drive was ever re-formatted even though ZimaOS tried to add it to the RAID several times.
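For what it’s worth, if ZimaOS uses Linux md/mdadm under the hood (an assumption on my part, not confirmed by the team), a drive that still carries old filesystem signatures or a stale md superblock can sometimes fail to rejoin an array. A sketch of wiping a drive cleanly before re-adding it, where the device name is only an example:

```shell
# Wipe stale RAID metadata and filesystem signatures from a drive
# before re-adding it to an md array. The device name is a
# placeholder -- verify with `lsblk` first, since wiping the wrong
# device destroys its data.
wipe_for_raid() {
    dev="$1"
    sudo mdadm --zero-superblock "$dev" 2>/dev/null || true  # clear old md metadata, ignore if none
    sudo wipefs -a "$dev"                                    # clear filesystem signatures (APFS/HFS+/exFAT/...)
}

# Usage (destructive!): wipe_for_raid /dev/sdf
```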
Also, do you know what the command-line command is for checking the RAID status percentage?
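Assuming the array is standard Linux md/mdadm (an assumption; `/dev/md0` is also just an example name), the rebuild percentage shows up in `/proc/mdstat` and can be pulled out like this:

```shell
# Show rebuild/resync progress for all md arrays; the percentage
# appears on the "recovery = NN.N%" line:
#   cat /proc/mdstat
#   sudo mdadm --detail /dev/md0   # detailed status for one array (md0 assumed)

# Small helper: extract just the percentage from mdstat-style output.
rebuild_pct() {
    grep -oE '(recovery|resync) *= *[0-9.]+%' "$1" | grep -oE '[0-9.]+'
}
```

While a rebuild is running, `rebuild_pct /proc/mdstat` would print something like `27.9`.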
@timothy I ended up using HFS+ because the Cube did not seem to understand APFS, I think. I might’ve used FAT32, but I don’t think so.
@timothy
Thank you very much for posting your personal usage, your information is very useful to IW and other users.
The situation above may be caused by unplugging the hard drive while the RAID5 check was still in progress.
After creation, RAID5 takes from a few hours up to a day to complete its parity check.
Only once this check is completed is your data protected.
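For reference, on Linux md the kernel throttles parity checks and rebuilds between two tunables (values in KB/s); raising the minimum can shorten a check at the cost of foreground I/O. The paths below are the standard Linux ones, assuming ZimaOS exposes them:

```shell
# Read the kernel's md resync/check speed limits (values in KB/s);
# speed_limit_min is the guaranteed floor per device.
show_md_speed_limits() {
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
}

# Temporarily prioritize the check/rebuild (100000 is an example value):
#   echo 100000 | sudo tee /proc/sys/dev/raid/speed_limit_min
```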
Well, after about 15-20 hours on the “Checking” status, my RAID5 has been rebuilt and is now “in protection mode.”
So my requirement #2 has been a success. I now feel comfortable using ZimaCube for my data, knowing if a drive goes bad it’s no big issue.
I think the last change I made was reformatting the drive as GUID + exFAT, instead of with APFS. It seems like the unsupported file systems really trip up ZimaOS? I don’t know why the RAID rebuild process didn’t successfully erase/format it.
(I have 11TB on the RAID with 6x12TB drives, so the parity check took a long time. I found a thread online where parity check took 36h for 100TB on 14TB drives. Apparently the slower+bigger the drives, the longer this process can take. In the future of ZimaOS, it would be nice if we could schedule parity checks to occur overnight / during off-time.)
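If ZimaOS is built on Linux md (again, an assumption) and gives shell access, a check can already be triggered manually through sysfs, so scheduling it off-hours could be done with cron until the OS gains a scheduler. The array name `md0` is only an example:

```shell
# Start a manual parity check on an md array via sysfs; the array
# name is a parameter ("md0" is only an example).
start_parity_check() {
    echo check | sudo tee /sys/block/"$1"/md/sync_action >/dev/null
}

# Example crontab entry: run a check every Sunday at 02:00
#   0 2 * * 0 /bin/sh -c 'echo check > /sys/block/md0/md/sync_action'
```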
Yes, the length of disk checks is one of the reasons that filesystems storing large amounts of data are so conservative in their design and implementation.
When a problem with a disk is detected, it is not simply a matter of swapping out the disk and the problem is fixed. Instead, after the disk with the errors is replaced, a process that places a large amount of stress on the entire array must run for tens of hours. During this stress period, the likelihood of a second problem occurring is higher than normal.
It will be interesting to see how IceWhale balances the wants of ZimaCube purchasers with data preservation. There is always a spectrum between ‘cheap, fast, now’ and the 100% reliability of the mission-critical enterprise. As home users, we tend to fall somewhere in the middle.
The way you describe it makes sense. I’m new to this whole RAID stuff so each step is a new experience…
For access vs reliability/security, a secondary precaution I’m taking is to use Crashplan to also backup my Cube + computers offsite.
Zima Team: One feature that I think should be implemented is an alert when a drive is lost/removed. ZimaOS said nothing when I pulled out a drive. Hopefully there’s an alert when one of the drives becomes un-healthy.
Also FYI: I was going to delete my RAID and start over with a new name, but when I clicked the Disable button, it just said “exit status 1” and failed.
Yes, Backup is an important consideration.
I personally tend to categorize my data into three tiers: crucial, important, and throwaway.
Crucial is the stuff that I just can’t lose. Important is the stuff I would rather not lose, but the world won’t end if I do lose it. Throwaway is the stuff that I can easily reproduce.
The vast majority of my data is throwaway. It is sensor data from robots that is also stored somewhere else, and I just have a local copy to use when running simulations after I have rewritten a piece of software to ensure that I have not caused regressions.
The other data is backed up locally in another NAS and sometimes remotely to a third NAS. If I lose the family pictures, my wife will probably kill me. If they go ‘offline,’ and it takes a day or so to restore across a relatively slow internet connection, it is not a problem.
There is never a 100% right way to do raid and backup… but there are ways that are most appropriate for you and your data.
On the question of warnings: we will add warnings in upcoming versions. What kinds of warnings would you like?
Alerts for:
- HDD health issues
- RAID health issues
- A drive in the RAID being disconnected/dying
- RAID parity check in progress
- RAID parity check failed
That’s what I can think of right now.
Edit:
- Temperature alerts
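In the meantime, if the OS exposes a shell, stock Linux tools can cover several of these, assuming mdadm and smartmontools are present (the email address and device names below are placeholders):

```shell
# RAID event alerts: mdadm's monitor mode can email on events such
# as Fail, DegradedArray, and RebuildFinished:
#   sudo mdadm --monitor --scan --mail admin@example.com --daemonise

# HDD health: SMART overall self-assessment for one drive.
smart_health() {
    sudo smartctl -H "$1" | grep -i 'overall-health'
}
# Usage: smart_health /dev/sda

# Drive temperature (a SMART attribute on most drives):
#   sudo smartctl -A /dev/sda | grep -i temperature
```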