NVMe internal copy looks capped at ~600 MB/s on ZimaOS? Here’s how to test properly (dd + fio + checklist)

Overview

Some users report “internal copy” on ZimaOS appearing capped at ~600–650 MB/s and assume “ZimaOS isn’t made for NVMe”.

In most cases, this speed is not an NVMe limitation. It usually comes down to:

  • the copy method being tested (especially the ZimaOS GUI / file manager internal copy pipeline), and/or
  • the workload type (many small files vs large files), and/or
  • filesystem behavior (metadata / CoW)

This post shows:

  • how to test real NVMe speed properly
  • why GUI internal copy can appear slower
  • a quick troubleshooting checklist

Expected speeds (quick reference)

Test / Interface                     | Typical real-world speed
SATA III SSD ceiling                 | ~500–600 MB/s
10GbE SMB transfer (Windows → NAS)   | ~900–1100 MB/s
NVMe raw sequential (dd/fio)         | ~1500–7000+ MB/s
NVMe internal copy (large files)     | ~1000–3000+ MB/s (depends on pool/filesystem)

If dd/fio show 2–3GB/s, ZimaOS is not “throttling NVMe to SATA speeds”.


Why GUI internal copy can be slower

Internal copy/move done via the ZimaOS Web UI / file manager is not the same as raw disk throughput.

Common reasons:

  • GUI copy often runs through a userspace service pipeline (progress tracking/safety), which may limit throughput
  • filesystem overhead (especially on Btrfs, metadata + CoW) can reduce internal copy speeds depending on the workload (see the reflink example after this list)
  • some moves behave like copy + delete depending on subvolumes/filesystems
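
A quick way to see the CoW effect on Btrfs is to compare a reflink clone against a full data copy of the same file. This is just an illustrative sketch: it assumes /DATA is a Btrfs mount and reuses the 10GB test file created in the test section below.

# CoW clone: metadata-only, should complete almost instantly on Btrfs
time cp --reflink=always /DATA/testfile /DATA/testfile.reflink

# Full data copy: rewrites every block, limited by the filesystem/device
time cp --reflink=never /DATA/testfile /DATA/testfile.full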

Known causes (small files vs large files)

This is the most common misunderstanding.

Large files (movies, ISOs, backups)

  • usually closest to max throughput
  • best for measuring real storage performance

Many small files (photos, app folders, project folders)

  • can be dramatically slower due to:
    • heavy metadata operations
    • CoW/checksums (Btrfs)
    • lots of file open/close operations
  • GUI copies amplify this effect

So a “600MB/s internal copy” result does not automatically mean NVMe is slow.
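
To see the small-file effect yourself, one option is to generate a directory of tiny files and time a copy of it next to the large-file copy. The file count and sizes below are arbitrary, just enough to make the metadata overhead visible:

# Generate 10,000 x 100KB files (~1GB total)
mkdir -p /DATA/smallfiles
for i in $(seq 1 10000); do dd if=/dev/zero of=/DATA/smallfiles/f$i bs=100K count=1 status=none; done

# Copying these will usually show far lower MB/s than the single 10GB file
time cp -r /DATA/smallfiles /DATA/smallfiles.copy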


Correct way to test NVMe speed on ZimaOS (copy/paste)

1) Confirm filesystem / mount type

mount | grep -E "/DATA|btrfs|ext4|zfs"
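
It can also help to confirm which physical device actually backs /DATA (the NVMe device typically shows up as nvme0n1 or similar, but names vary per system):

lsblk -o NAME,MODEL,SIZE,ROTA,MOUNTPOINT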

2) Direct NVMe write test (10GB, bypass cache)

dd if=/dev/zero of=/DATA/testfile bs=1G count=10 oflag=direct status=progress
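
Note: bs=1G asks dd to buffer 1GB in RAM per block. On low-memory systems, the same 10GB test with smaller blocks is an equivalent alternative:

dd if=/dev/zero of=/DATA/testfile bs=1M count=10240 oflag=direct status=progress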

3) Direct NVMe read test

dd if=/DATA/testfile of=/dev/null bs=1G iflag=direct status=progress
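
If you prefer not to use iflag=direct, another way to keep the read honest (not served from the page cache) is to drop caches first; this requires root:

sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/DATA/testfile of=/dev/null bs=1G status=progress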

4) Proper benchmark (fio sustained write)

fio --name=nvme --filename=/DATA/fio.test --size=10G --rw=write --bs=1M --iodepth=32 --numjobs=1 --direct=1 --runtime=30 --group_reporting
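
A matching sequential read benchmark (same parameters, just --rw=read) is a reasonable counterpart:

fio --name=nvme-read --filename=/DATA/fio.test --size=10G --rw=read --bs=1M --iodepth=32 --numjobs=1 --direct=1 --runtime=30 --group_reporting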

If these show well above 600MB/s, the NVMe + ZimaOS storage stack is fine.


Correct internal copy workflow (GUI vs CLI comparison)

If the goal is testing “internal copy speed”, compare GUI vs CLI using the same dataset:

Create test file

dd if=/dev/zero of=/DATA/testfile bs=1G count=10 oflag=direct status=progress

CLI internal copy test

mkdir -p /DATA/copytest
time cp /DATA/testfile /DATA/copytest/testfile.copy
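
Note that time cp can return before all data is flushed to disk, since the write lands in the page cache first. To include the flush in the measurement, one option is to wrap the copy and a sync together:

time sh -c 'cp /DATA/testfile /DATA/copytest/testfile.copy2 && sync'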

Now copy the same file in the ZimaOS GUI file manager:

  • /DATA/testfile → /DATA/copytest/

If the GUI copy is significantly slower than the CLI copy (while dd/fio show high throughput), the limitation almost certainly lies in the GUI internal copy workflow, not the NVMe drive.


Troubleshooting checklist (disk vs CPU vs pipeline)

Install ttydBridge from the ZimaOS App Store so you can open two terminal tabs.

During the slow GUI copy, run:

Terminal 1:

iostat -xm 1

Terminal 2:

top -o %CPU

This will quickly show whether the bottleneck is:

  • disk saturation / queueing
  • single-thread CPU bound pipeline
  • background tasks competing for I/O
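
If iostat shows the disk is nowhere near 100% utilized during the GUI copy, per-process I/O stats can help identify which process in the copy pipeline is the limit. pidstat comes from the same sysstat package as iostat (assuming it is available on your build):

pidstat -d 1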

What to post if you need help (copy/paste template)

If you still believe internal copy is slow, post the following outputs:

Filesystem / mount type

mount | grep -E "/DATA|btrfs|ext4|zfs"

Raw NVMe performance

dd if=/dev/zero of=/DATA/testfile bs=1G count=10 oflag=direct status=progress
dd if=/DATA/testfile of=/dev/null bs=1G iflag=direct status=progress

fio benchmark

fio --name=nvme --filename=/DATA/fio.test --size=10G --rw=write --bs=1M --iodepth=32 --numjobs=1 --direct=1 --runtime=30 --group_reporting

Monitoring during slow GUI copy (two terminals)

iostat -xm 1
top -o %CPU

(Optional) SMB speed from Windows → NAS over 10GbE (e.g. ~950MB/s)


Recommended format for posting results

Please paste results like this:

Hardware: (CPU / RAM / NVMe model)
Pool type: (single / mirror / parity)
Filesystem: (btrfs / ext4 / zfs)
dd write: ___ GB/s
dd read: ___ GB/s
fio write: ___ MiB/s

During GUI copy:
iostat util: ___%
CPU/core usage: ___


Conclusion

Before concluding “ZimaOS isn’t made for NVMe”, validate performance with dd + fio.

In multiple cases, users reporting ~600MB/s internal GUI copy later confirmed:

  • NVMe dd/fio = 2–3GB/s
  • SMB over 10GbE = full speed
  • slowdown isolated to GUI internal copy method and/or small-file workloads