Are there any plans to include mergerfs and SnapRAID so we can mix drive sizes like in Unraid? That's the biggest thing missing, in my opinion. I would love to be able to pool mixed drive sizes together and create parity via SnapRAID.
Hi, why do you want this feature, given that we have JBOD in the upcoming 1.4.2? Could you tell us how you would use mergerfs and SnapRAID in the real world/your workflow, and why they are irreplaceable? Thanks for your response; this is important for us as we define our OS.
The biggest reason people want SnapRAID is the ability to use different-sized disks AND have some level of protection - i read up on it, and it really is quite clever how they handle parity without distributing it across disks in the traditional way
not sure i will ever use it - but i think a lot of people in the consumer space will - also it's a premade, ready-to-go solution that just needs some nice UI around it
you could be waiting a long time for the ZFS additions that will allow dissimilar disk sizes
also i think a lot of people mean different things by JBOD
i was there when the term was coined (pre-2000) and it just meant disks with no hardware RAID, with each disk having a separate FS
however (and much to my displeasure)
JBOD now means to people not RAID and not ZFS, but a single filesystem on top - this is what mergerfs provides (i.e. regular folks don't have to know about drives and mount points; it combines them into one apparent filesystem)
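to make that concrete: pooling with mergerfs is essentially a one-line mount. A minimal sketch assuming two disks already mounted at /mnt/disk1 and /mnt/disk2 (paths and options are illustrative, not anything ZimaOS ships):

```
# /etc/fstab - pool two already-mounted disks into one view at /mnt/storage.
# Branches are colon-separated; category.create=mfs places new files on the
# branch with the most free space, so mixed sizes are fine.
/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  cache.files=off,category.create=mfs,dropcacheonclose=true  0  0
```

the underlying disks stay independent filesystems, so a lost disk only loses the files that happened to live on it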
This is exactly the use case. After losing data to a failed Synology SHR, I am fully onboard with SnapRAID for parity and mergerfs to merge my 3 TB, 6 TB, and 12 TB HDDs. RAID has its place, but not for the consumer who adds drives over a span of years. RAID is too inflexible for adding and swapping out drives.
thanks for validating, i was very anti this solution until i realized how the parity actually works, it's super clever and non-intuitive, still has some limitations in terms of the amount of parity space available, but generally that seems to not be an issue
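to show the "clever and non-intuitive" part, here's a toy model of snapshot parity over mixed-size disks (my own illustration, not SnapRAID's actual code): parity block i is the XOR of block i across every disk large enough to have one, so the parity only needs to be as big as the largest data disk, and any single lost disk can be rebuilt from parity plus the survivors:

```python
# Toy illustration (not SnapRAID's real on-disk format) of snapshot parity
# over disks of different sizes. Each "disk" is a list of fixed-size blocks;
# shorter disks simply contribute nothing past their last block.

def xor_blocks(blocks, size):
    """XOR a list of equal-size byte blocks together."""
    out = bytearray(size)
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def build_parity(disks, block_size=4):
    """Parity row i = XOR of block i of every disk that has a block i,
    so parity is only as long as the largest disk."""
    rows = max(len(d) for d in disks)
    return [
        xor_blocks([d[i] for d in disks if i < len(d)], block_size)
        for i in range(rows)
    ]

def recover(disks, parity, lost, lost_len, block_size=4):
    """Rebuild disk `lost` (its length is known from metadata) by XORing
    parity with the same-index blocks of every surviving disk."""
    rebuilt = []
    for i in range(lost_len):
        peers = [d[i] for j, d in enumerate(disks) if j != lost and i < len(d)]
        rebuilt.append(xor_blocks([parity[i]] + peers, block_size))
    return rebuilt

# Three disks of different sizes (3, 1, and 2 blocks):
disks = [[b"aaaa", b"bbbb", b"cccc"], [b"dddd"], [b"eeee", b"ffff"]]
parity = build_parity(disks)
assert recover(disks, parity, lost=0, lost_len=3) == disks[0]
```

the real tool additionally checksums every block, which is where its bit-rot detection comes from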
I’d second this feature request.
I’ve been using SnapRAID with OpenMediaVault for a couple of years now. It’s basically like Unraid’s parity functionality, but it works with drives of different sizes and drives with data already on them, and has the benefit of built-in bit-rot protection.
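For reference, a SnapRAID setup is just a small config plus a scheduled command. A minimal sketch (the paths are examples and the names d1/d2 are arbitrary labels, not OMV or ZimaOS defaults):

```
# snapraid.conf - one or more dedicated parity files on the largest disk:
parity /mnt/parity1/snapraid.parity
# Content files hold the block/checksum metadata; keep several copies:
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
# Data disks can be any size, any filesystem, with existing data on them:
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

From there, `snapraid sync` updates parity on a schedule, `snapraid scrub` verifies checksums to catch bit rot, and `snapraid -d d1 fix` rebuilds a replaced disk.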
See this comparison chart for some of the features.
EDIT: Just noticed that ZimaOS already uses mergerfs? At least according to this listing on the official GitHub and a post on this forum by the developer of mergerfs.
@Zima-Giorgio Could you please elaborate on the current status of mergerFS integration?
@TOMillr I’m not sure if ZimaOS uses mergerfs. CasaOS certainly did (and may still.)
@Zima-Giorgio It would be helpful if you described what “JBOD” means exactly as the term is extremely vague and has no defined implementation meaning.
As for the use case… as mentioned by others, there are situations where you want a simple logical pool of individual filesystems: “write once, read many, rarely changes, low/no replacement concern” data sets. mergerfs isn’t the only way to accomplish this, but it is a reasonably popular one. CasaOS’s “merge” mode uses mergerfs.
I once had an open and regular back and forth with some people at Ice Whale but haven’t heard from them in some time. We talked about deeper integration with CasaOS and then ZimaOS but it never went anywhere. I don’t really blame them as a focus on traditional RAID and such was more important.
I basically agree with what scyto is saying here.
I’m also interested in having integration with SnapRAID. I’m something of a beginner with NAS and home storage, so I’d like something easy to use. I don’t need all the features of a full-on RAID; I’d like to just add disks if and when I need to, without too much trouble reformatting, etc. But I like the idea of some data protection with a parity drive. mergerfs with this would be the cherry on top.
Hello,
I would also like a parity (redundancy) solution for multiple different sizes of disks.
The real world is:
I started with 320 GB drives, then upgraded to 500 and 640 GB, then 1, 2, 4, and 6 TB, but as prices are high and income is not always available, the upgrade happens 1 or 2 disks at a time.
My main NAS is an 8-bay (backup and main safe storage).
The archive is a Terramaster with expansions (14 disks total).
The home lab server (currently Unraid, hoping to change) has 24 SATA drives of 1 TB and 2 NVMe drives
(backed up on the NAS and the archive).
All the disks move from device to device, so when I buy a disk for the NAS, the replaced disk (if OK) moves to the archive, and from the archive the smallest disk moves to the lab storage.
So a real-life scenario: I buy an 8 TB drive to replace a 6 TB drive in the NAS;
the 6 TB drive replaces a 2 TB drive in the archive;
the 2 TB drive replaces a 1 TB drive in the lab.
All drives have multiple years of service (4 to 10+) and have moved through more than 4 devices in their lifespan.
The lab is an old PC with HBAs.
The NAS is a 7-year-old Synology.
The Terramaster is 4 years old with 2 expansions of 4 and 5 disks each (older than the main unit, an F5-422).
So all hardware was procured piecemeal and second-hand (as budget allows).
@Zima-Giorgio
Any chance you might include snapRAID in a future release?
Just to reiterate: a robust parity solution is THE missing feature in ZimaOS, imo, especially for consumers looking for an easy way to set up a small media server. SnapRAID is a proven CLI tool and would be a great addition to ZimaOS.
Presenting the actual drives to a ZVM guest OS would also allow users to implement mergerfs, SnapRAID, or DrivePool themselves and manage it all accordingly, without any groundbreaking feature implementation in the ZimaOS architecture. That seems like very low-hanging fruit, since ZVM runs on top of QEMU and you’d just be giving the user access to the actual device blocks, @Zima-Giorgio?
Just wanted to chime in here too. I would LOVE to give ZimaOS a shot (currently running Unraid), but not being able to use different-sized disks AND have some level of protection (ideally 2 parity drives) is unfortunately a non-starter for me. REALLY hoping this option makes its way into Zima sooner rather than later! @Zima-Giorgio, any potential for this to happen?
Would like to point out that nonraid (GitHub - qvr/nonraid: NonRAID - unRAID storage array kernel driver fork) is now available: a fork of Unraid’s parity technology. I have no personal experience with it, but combined with mergerfs it could become a very attractive alternative to Unraid for these use cases, especially with mergerfs’ upcoming release, which will support IO passthrough, improving performance.
Now that’s a true game changer. If the Zima team implements this, they will have a clear path to ramping up their user base quite quickly.
cc @ED209 for awareness as this is huge!
Interesting. Going to give that project a look over the weekend.
However, what’s actually the benefit of using this solution over SnapRAID? That it writes the parity information in real time instead of on a scheduled cadence?
It doesn’t provide bit-rot protection or TRIM support when used with SSDs, like SnapRAID does. Or did I miss something?
I got curious to see how CasaOS uses mergerfs, so I installed the latest version today in a VM and tried to set up the merge storage (“beta”). It hung on this screen… but the feature seems to be there. Unsure if it loaded mergerfs under the hood, since it seems to have failed at setup.
> Would like to point out that nonraid
This is fantastic. I am going to experiment with this “nonraid” kernel module, add bcache, and see if I can finally get the promised “perfect home media NAS” for my next update to GitHub - TheLinuxGuy/free-unraid: Achieving a free (opensource) alternative to unraid for media server (homelab) use.
@Zima-Giorgio
Still would love to hear from you about whether SnapRAID and mergerfs might make their way into a future release.
@trapexit
I’ve looked into this some more, and my assertion seems to be correct. Just like on Unraid, there doesn’t seem to be any bit-rot protection or TRIM support.
For me this would be a huge win for my secondary (larger) NAS where I store my static media assets. I don’t need the IOPS improvement that RAID might give me, but I would like a JBOD that has some (eventual, e.g. daily-sync) resistance to drive failures. That’s because I’d like the disks not in use to spin down and go to sleep to help keep noise and power costs down and potentially (or debatably!) allow HDDs to live longer.
The additional selling point ZimaOS could provide is ensuring that SnapRAID syncs run off btrfs snapshots rather than a live filesystem, without exposing that complexity to the end user. I think that would raise ZimaOS above OpenMediaVault as an appliance (no disparagement of OMV intended; it’s a fine product, it’s just that you have to install plugins and third-party scripts).
