RAID 5 expanding for 90 hours

Hi there.

Just sharing some feedback here.
I recently installed ZimaOS 1.5 Plus, which I bought to support the developers, to set up my DIY NAS.

Started with 5x 18 TB disks in a RAID 5 config and all was fairly normal, until a few days ago I added 3 more 18 TB disks and pressed Expand Raid.

Big mistake…
The expansion process has been going on for several days now, still only 44% done on an estimated 90+ hour process.
My NAS is severely bottlenecked in performance now and I probably have 4 or 5 more days to go until this finishes, assuming it goes error-free.
Apps like Plex that rely on large media files are buffering every 10 seconds when I watch a movie from the NAS, for example.

Surely this delay is not normal? Or did I do something wrong? The disks are all fine without any bad sectors.
I have never seen anything like this on my other NAS systems, like Unraid or TrueNAS.
Expansion would take a day, tops. Not even close to a week.

Thanks

The long RAID-5 expansion isn’t a user mistake; it’s mainly physics and software limits. When you add disks to a large array (in your case 8 × 18 TB), ZimaOS must re-stripe the existing data and recompute parity across every block. That’s a massive I/O job, and if the system is also streaming Plex or other data, it slows down even more.

A 90-hour estimate usually points to one or more of these:

• CPU or RAM bottleneck from the software RAID process.
• Drives or controller saturating under load.
• Expansion running while the array is still in heavy use.
• ZimaOS’s md-raid implementation being slower than Unraid/TrueNAS for live expansion.

The process should be left to finish without extra workloads. Once done, performance returns to normal. For future expansions, stop high-I/O apps like Plex and schedule the rebuild during idle time — it will cut the duration drastically.
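
If ZimaOS really is standard Linux md-raid under the hood (as the list above suggests), the kernel also exposes per-device speed caps for rebuild/reshape work. A minimal sketch for checking them over SSH, assuming the usual /proc/sys/dev/raid paths are present (Python is used here only for readability; the same files can simply be read with cat):

```python
# Minimal sketch: read the Linux md rebuild/reshape speed caps (KB/s per device).
# Assumes standard md-raid sysctl paths exist on the NAS; reading them does not
# need root, changing them does.
from pathlib import Path

def read_md_speed_limits():
    limits = {}
    for name in ("speed_limit_min", "speed_limit_max"):
        path = Path("/proc/sys/dev/raid") / name
        limits[name] = int(path.read_text().strip())  # value is KB/s per device
    return limits

if __name__ == "__main__":
    for name, kb_s in read_md_speed_limits().items():
        print(f"{name}: {kb_s} KB/s (~{kb_s / 1024:.0f} MB/s)")
```

Raising speed_limit_min (as root) tells md to prioritize the reshape over competing I/O, which is the flip side of the advice above: either give the reshape idle time, or explicitly let it win against the apps.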

What you should check/do now

Here are step-by-step suggestions:

  1. Pause non-essential load
  • If possible, stop Plex or heavy I/O on the NAS while the expansion is running. Give the rebuild/expand job full attention.
  • This will both speed up the process and reduce risk of corruption.
  2. Check system resource usage
  • Via monitoring tools, check CPU, RAM and I/O wait: is the CPU pegged, or are the drives stuck in heavy wait? (A minimal check is sketched after this list.)
  • If the CPU is at 100% or I/O wait is high, performance is constrained by hardware.
  3. Check the drive/controller health/logs
  • Use S.M.A.R.T. to verify all 8 drives are healthy and performing well (a quick scripted check is sketched after this list).
  • Check the controller/HBA – ensure it’s operating in an optimal mode (AHCI or proper HBA mode, not consumer RAID).
  • Ensure no drives are entering degraded mode, queueing errors, etc.
  4. Check ZimaOS logs / RAID utility status
  • See whether the expansion job is still truly making progress (44% reported) or whether it has stalled (a sketch for reading the reshape status follows this list).
  • Look for error messages in logs (e.g., /var/log/… or via SSH).
  • On the forum, folks have reported “stuck at x%” conditions, especially with large arrays.
  5. Estimate expected time vs actual
  • With 8×18 TB drives you are dealing with 144 TB of raw capacity (about 126 TB usable after single parity). Many hours are reasonable, but days of degraded performance could point to a sub-optimal configuration.
  • For reference: an md-style reshape works through all member disks in parallel, so the job is bounded by roughly one disk’s capacity, about 18 TB per drive. At an effective 150 MB/s per disk that is ~33 hours; at 50 MB/s it is ~100 hours. So 90+ hours is on the slow side but entirely possible when the array is also serving other I/O (a worked estimate follows this list).
  6. Consider waiting vs re-doing
  • If it’s still moving and you’ve validated no errors, you might just let it continue.
  • But if it’s stalled (no % change for a long time) or if you find that the hardware or config is badly under-spec’d, you may consider aborting (if supported) or planning a fresh rebuild with better hardware/config.
  7. Future: Plan for a rebuild/expand with minimal load
  • For future expansions, if possible:
    • Add drives in a maintenance window.
    • Stop non-essential services.
    • Ensure controller/disk health is verified before the operation.
    • If the OS supports it, consider doing the expansion offline (disconnected from heavy usage).
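
For point 2, here is a minimal sketch of a CPU/I/O-wait check that needs nothing beyond SSH access and Python, since it just samples the standard Linux /proc/stat counters twice (no claim that ZimaOS ships any particular monitoring tool):

```python
# Minimal sketch: measure CPU busy% and I/O-wait% over a short window by
# sampling /proc/stat twice (standard Linux; no extra packages needed).
import time

def cpu_sample():
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]      # idle + iowait jiffies
    iowait = fields[4]
    total = sum(fields)
    return total, idle, iowait

def cpu_usage(interval=2.0):
    t1, i1, w1 = cpu_sample()
    time.sleep(interval)
    t2, i2, w2 = cpu_sample()
    dt = t2 - t1
    busy_pct = 100.0 * (dt - (i2 - i1)) / dt
    iowait_pct = 100.0 * (w2 - w1) / dt
    return busy_pct, iowait_pct

if __name__ == "__main__":
    busy, iowait = cpu_usage()
    print(f"CPU busy: {busy:.1f}%  I/O wait: {iowait:.1f}%")
```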
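For point 3, a quick scripted S.M.A.R.T. pass, assuming smartmontools is available on the box and the drives show up as /dev/sda through /dev/sdh (adjust the glob if yours are named differently; drives behind some HBAs may also need smartctl's -d option):

```python
# Minimal sketch: run "smartctl -H" against each drive and print the health
# verdict. Assumes smartmontools is installed and the drives appear as /dev/sd*.
# Run as root so smartctl can query the devices.
import glob
import subprocess

def smart_health(device):
    result = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    )
    for line in result.stdout.splitlines():
        # ATA drives report "overall-health", SAS drives "SMART Health Status"
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "no health line found (is SMART enabled on this device?)"

if __name__ == "__main__":
    for dev in sorted(glob.glob("/dev/sd[a-z]")):
        print(f"{dev}: {smart_health(dev)}")
```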
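For point 4, if the array is a standard md device you can watch the kernel's own reshape counters instead of the GUI. A sketch that parses /proc/mdstat, assuming the usual mdstat reshape line format:

```python
# Minimal sketch: pull the reshape progress line out of /proc/mdstat and report
# percent done, effective speed, and the kernel's own finish estimate.
import re

def reshape_status(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    m = re.search(
        r"reshape\s*=\s*([\d.]+)%.*?finish=([\d.]+)min.*?speed=(\d+)K/sec",
        text,
    )
    if not m:
        return None
    percent, finish_min, speed_k = float(m.group(1)), float(m.group(2)), int(m.group(3))
    return {
        "percent": percent,
        "finish_hours": finish_min / 60.0,
        "speed_mb_s": speed_k / 1024.0,
    }

if __name__ == "__main__":
    status = reshape_status()
    if status is None:
        print("No reshape in progress (or mdstat format differs).")
    else:
        print(f"{status['percent']:.1f}% done, "
              f"~{status['speed_mb_s']:.0f} MB/s, "
              f"~{status['finish_hours']:.1f} h remaining (kernel estimate)")
```

Run it a few minutes apart: if the percentage keeps moving, the job is healthy even when the GUI looks frozen.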
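And for point 5, the back-of-the-envelope math as a tiny script, so the numbers are easy to replay with your own assumed speeds (the 18 TB disk size comes from this thread; the speeds are illustrative guesses):

```python
# Minimal sketch of the estimate from point 5: an md-style reshape walks all
# member disks in parallel, so total time is roughly
# (per-disk capacity) / (effective reshape speed per disk).
DISK_TB = 18            # capacity of each member disk (from this thread)
MB_PER_TB = 1_000_000   # decimal units, as drive vendors count

def reshape_hours(disk_tb, speed_mb_s):
    return disk_tb * MB_PER_TB / speed_mb_s / 3600

if __name__ == "__main__":
    for speed in (150, 100, 55, 25):  # assumed effective MB/s per disk
        print(f"{speed:>3} MB/s -> ~{reshape_hours(DISK_TB, speed):.0f} hours")
```

With these numbers the reported 90+ hour estimate corresponds to roughly 55 MB/s of effective reshape throughput per disk, which is believable for an array that is simultaneously serving Plex streams.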

Thanks for the reply.

I believe it is not the hardware resources.
Resource usage is currently very low.

I’m running a modern HBA card (LSI 9500) that worked well with my other NAS systems, like Unraid.

I only have a couple of Apps running, like Plex, but currently I have stopped all of them to see if it improves performance somehow.

Let’s see. I see no errors anywhere, just a massive waiting period.

Guess I just need to wait 2 or 3 more days :frowning:

Good sign overall: the system isn’t overloaded, and the HBA is handling the drives fine. What’s happening is simply the RAID expansion process itself being extremely I/O-intensive, not CPU- or RAM-bound.

Even with an LSI 9500 and 126 GB of memory, the rebuild speed depends on how fast each disk can read and write sequentially. Eight 18 TB drives means roughly 144 TB of raw capacity to re-stripe and recompute parity for, so it’s perfectly normal to see 2–4 days total duration.

You’ve done the right thing by stopping Plex and letting it finish undisturbed. Once it completes, performance will return to normal; there is nothing wrong with your setup, just a long expansion window.

Thank You again.

I noticed that after stopping all Apps, it seems to be going a bit faster.
Maybe it’s just perception, but it has jumped a couple of percentage points faster than before.

Will report when finished.

That’s a good sign; stopping all apps definitely helps. Even lightweight background services can slow parity operations because every bit of I/O competes with the rebuild.

The few-percent jump you noticed isn’t just perception; with Plex and other containers idle, the disks can dedicate full bandwidth to the RAID process. At this rate it should keep progressing steadily now; best to just let it run uninterrupted until completion.


Just sharing that since stopping all the apps, the % went from 44% to 90% in 12 hours.

Definitely a big improvement from before.
Probably will have this expansion finished later today.


Just finished finally.

Word of advice to everyone: if you expand your RAID, stop all apps and VMs first.

It will go way faster.


How long did it take in the end for you? A week?

5 days in total.
