Disk Damage - RAID 1 Resyncing(99.9%-0mins left)

Hi,
I’m not familiar with NAS and RAID things.
I have two 6 TB drives, tested and apparently working well.
I made a RAID 1 in ZimaOS but I get strange messages:


(I’m surprised because a few minutes ago the resyncing message was different; the title is a copy-paste of it:
RAID1 Resyncing(99.9%-0mins left), and that message stayed on screen for a long time.)

I don’t really understand what is happening, or whether one of the drives has a problem.

I installed Scrutiny to confirm the SMART info:

All the detailed info looks good (passed on all items).

Any ideas from the community?
Thanks for your help!

Hey, would you tell us the version of your OS and the specs of your machine, please?

Of course, sorry about these basic elements I missed.

ZimaOS v1.5.0
Running in Proxmox with 2 cores (Intel) and 16 GB RAM.
PCIe passthrough for my HBA card with the 2 drives.

I still have to test adding 2 other drives in RAID 1 in the same configuration.

For your info, here is the message I get most of the time:

Copy that. The team will investigate it.


I did some investigation on my side.
I don’t see any trouble, but my disks are always spinning (no spin-down or hibernation).
I think I will use some other OS to avoid problems with my disks :frowning:
(I have to check how to import the RAID array into another NAS OS non-destructively… lol)
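One thing I might try, to see whether something is actually doing I/O on the disks or whether they are simply never told to sleep, is reading the kernel’s I/O counters twice, a minute apart (assuming the array members are still sda and sdb):

root@ZimaOS:~# grep -Ew 'sda|sdb' /proc/diskstats               # read/write counters for each disk
root@ZimaOS:~# sleep 60; grep -Ew 'sda|sdb' /proc/diskstats     # if the counters keep climbing, something is writing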

root@ZimaOS:~# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sda[0] sdb[1]
      5860390464 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.0% (1984/5860390464) finish=48836.5min speed=1984K/sec
      bitmap: 44/44 pages [176KB], 65536KB chunk

unused devices: <none>

 


root@ZimaOS:~# mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Oct  1 20:31:04 2025
        Raid Level : raid1
        Array Size : 5860390464 (5.46 TiB 6.00 TB)
     Used Dev Size : 5860390464 (5.46 TiB 6.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Oct 22 18:17:32 2025
             State : clean, resyncing 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

     Resync Status : 99% complete

              Name : zimaos:bf5f3a3fef16c1ee
              UUID : 0bf2d261:ba27a8cc:02a03fbd:fb0ad44e
            Events : 85472232

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb


root@ZimaOS:~# mdadm --misc --action=check /dev/md0
mdadm: Count not set action for /dev/md0 to check: Device or resource busy

 


root@ZimaOS:~# smartctl -a /dev/sda
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.25] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               IBM-E050
Product:              ST6000NM0014
Revision:             BC7B
Compliance:           SPC-4
User Capacity:        6,001,175,126,016 bytes [6.00 TB]
Logical block size:   4096 bytes
Formatted with type 2 protection
8 bytes of protection information per logical block
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000c500869212af
Serial number:        Z4D4Z96A0000R726V88Q
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Wed Oct 22 18:18:11 2025 EDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Grown defects during certification <not available>
Total blocks reassigned during format <not available>
Total new blocks reassigned <not available>
Power on minutes since format <not available>
Current Drive Temperature:     47 C
Drive Trip Temperature:        65 C

Accumulated power on time, hours:minutes 46275:50
Elements in grown defect list: 0

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   1389956493        0         0  1389956493          0    1944435.580           0
write:         0        0         0         0          0      85493.810           0
verify: 3977211041        0         0  3977211041          0      45028.923           0

Non-medium error count:       44

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                   -       1                 - [-   -    -]

Long (extended) Self-test duration: 38632 seconds [10.7 hours]



 

root@ZimaOS:~# smartctl -a /dev/sdb
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.25] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               IBM-E050
Product:              ST6000NM0105  D
Revision:             BCC6
Compliance:           SPC-4
User Capacity:        6,001,175,126,016 bytes [6.00 TB]
Logical block size:   4096 bytes
Formatted with type 2 protection
8 bytes of protection information per logical block
LU is fully provisioned
Rotation Rate:        7200 rpm
Form Factor:          3.5 inches
Logical Unit id:      0x5000c500953dd17f
Serial number:        ZAD3QNQV0000C829DCVM
Device type:          disk
Transport protocol:   SAS (SPL-4)
Local Time is:        Wed Oct 22 18:18:15 2025 EDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Grown defects during certification <not available>
Total blocks reassigned during format <not available>
Total new blocks reassigned <not available>
Power on minutes since format <not available>
Current Drive Temperature:     46 C
Drive Trip Temperature:        65 C

Accumulated power on time, hours:minutes 46275:52
Elements in grown defect list: 0

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/    errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:   3019683701        0         0  3019683701          0    2032748.898           0
write:         0        0         0         0          0     153876.335           0
verify: 1111958711        0         0  1111958711          0      44795.245           0

Non-medium error count:      875

SMART Self-test log
Num  Test              Status                 segment  LifeTime  LBA_first_err [SK ASC ASQ]
     Description                              number   (hours)
# 1  Background short  Completed                   -       1                 - [-   -    -]

Long (extended) Self-test duration: 38632 seconds [10.7 hours]


Yeah, this one’s pretty common; your setup actually looks spot on. Both drives are healthy, SMART looks clean, and [UU] means the RAID’s fully synced.

The “99.9% – 0 mins left” bit is just a little dashboard hiccup that can show up when running ZimaOS in Proxmox with PCI passthrough. The sync’s already done; the UI just doesn’t refresh properly at the end.

The drives staying awake for a while after that is normal too. ZimaOS keeps them active a bit longer after a rebuild. Once things settle, they’ll quiet down again.

If the message doesn’t clear on its own, a quick reboot of the VM or restart of the storage service usually fixes it straight away.
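If you want to double-check from the shell that the array really is in sync (assuming it’s still /dev/md0 as in your output), something like this should show it:

root@ZimaOS:~# mdadm --detail /dev/md0 | grep -E 'State|Resync'   # expect "clean" and no Resync Status line
root@ZimaOS:~# cat /proc/mdstat                                   # the resync progress bar disappears once it's finished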

OK. Good to know. Thanks.
I’m testing the drives with another VM just to be sure.
I tried turning off the VM (even rebooting the Proxmox node) with no success.
But my issue is that the UI never shows 100% sync, and the disks (checked at 2, 3 and 4 am… yeah, my nights are complicated) are always on duty.
I can live with an error message due to my setup :slight_smile: but it looks strange to have the disks always on.

I think the best option for you is to mirror using ZFS directly in Proxmox and mount the disk in your ZimaOS VM.

A RAID/mirror in ZFS with Proxmox will be more secure if ZimaOS crashes. Fewer steps is better :slight_smile: I use this method on my 7 ZimaOS servers and it works great :slight_smile:
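Just as a rough sketch of what I mean (the disk paths, pool name, zvol size and VM ID below are placeholders, adapt them to your setup), on the Proxmox host it would look something like:

root@proxmox:~# zpool create tank mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>
root@proxmox:~# zfs create -V 5500G tank/zimaos-data            # a zvol to hand to the ZimaOS VM
root@proxmox:~# qm set 100 --scsi1 /dev/zvol/tank/zimaos-data   # attach it to VM 100 as a virtual disk

ZimaOS then just sees a single disk, and Proxmox/ZFS handles the mirroring underneath.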

I will eventually try to create a ZFS pool in Proxmox and mount it as a drive in the ZimaOS VM.
Digging is always a way of finding issues :slight_smile:
I finally discovered a defect on one of my drives… a small D letter after the product name when executing: smartctl -a /dev/sdb

root@ZimaOS:~# smartctl -a /dev/sdb
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.12.25] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               IBM-E050
Product:              ST6000NM0105  D
Revision:             BCC6
Compliance:           SPC-4

Thank you ALL for all the information you gave me, I learned a lot!
I’m doing a long SMART test on the drive to identify or correct the defect, and maybe replace the drive.
I’ll keep you posted.

Your RAID’s perfectly healthy and fully synced; this isn’t a ZimaOS issue but an old Linux mdadm quirk. The system used to pause at 99.9% while it was just tidying up the last blocks, even though the rebuild was already done. Linux later fixed it to say “Cleaning up” instead, but some setups still show the old 99.9% message.

I’ve got ZimaOS running on Proxmox with RAID too and haven’t seen it, so it likely depends on passthrough handling. The drives staying active is harmless; mdadm just keeps light background checks running.

All good on your end, just the classic Linux display bug.
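If you’re curious what the MD layer itself is reporting (assuming the array is md0), the sysfs interface described in the kernel docs below exposes it directly:

root@ZimaOS:~# cat /sys/block/md0/md/sync_action    # idle / check / resync
root@ZimaOS:~# cat /sys/block/md0/md/mismatch_cnt   # usually 0 on a healthy mirror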

Here are some references:

A technical article explaining “recovery and resync” behavior in Linux.

A real case discussion on “mdadm stalls at 99.9%” which helps show this is known behaviour.

The official mdadm manual page showing how RAID states are reported in Linux:

https://www.man7.org/linux/man-pages/man8/mdadm.8.html

Linux kernel MD admin guide (kernel.org) – explains MD/RAID states and how /proc/mdstat reports resync/check/clean.

These show the behavior is at the Linux MD layer, not ZimaOS, and that upstream has touched the “finalization/cleanup → notify UI” path.


Nice find, Max!
That little “D” just marks a firmware or batch variation; nothing to worry about.

Good call running a long SMART test to be sure. And setting up a ZFS pool in Proxmox is a great way to learn how the layers work; keep us posted on how it goes!
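In case it’s useful, the long test can also be started and checked from the ZimaOS shell with the same smartctl you already used:

root@ZimaOS:~# smartctl -t long /dev/sdb      # start the extended self-test (~10.7 hours per your log)
root@ZimaOS:~# smartctl -l selftest /dev/sdb  # check the progress and result later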


LOL!
Trust humans and not AI :smiley:
Perplexity told me the D letter meant a defect on the drive.
The D appears here too:

root@ZimaOS:~# lsscsi
[0:0:0:0]    disk    IBM-E050 ST6000NM0014     BC7B  /dev/sda 
[0:0:1:0]    disk    IBM-E050 ST6000NM0105  D  BCC6  /dev/sdb 

The SMART test is running for another 9 hours.

Thank you @gelbuilding for shedding light on these issues (which are not actually issues).

Haha nice one, Max!

Glad it all made sense in the end; good move running the long SMART test too!