Prototype ZimaCube Pro v.2 Mad Scientist Review and Testing

GOOOOOOD morning people! So today I just wanted to share with you the Prototype ZimaCube Pro I have received for testing and validation.

It’s been a mad ride, but I finally feel like I’m at a point where I can share some useful information with the community :heart_on_fire:

So first up, a big shout out to the team at IceWhale who have made this happen, and to Western Digital for providing the disks used in these tests.

Of course, some want to see what’s coming, so here is my first look and review of the Prototype ZimaCube Pro (please remember that things may change between this prototype and the release version…)

And of course unboxing of the disks from WD:

And some quick fan noise tests:

Full Speed:

Auto Speed:

Anything specific you would like to see, let me know :slight_smile:

Of course, everyone loves pictures!!!

The Cube arrived, and so did my disks!!!
image
image

Prototype Pro Left, N100 Right
image

Prototype Pro bottom, N100 top
image

image

Use of the M.2 slot for the 10GbE port on the rear of the case
image
image

The quad NVMe card can be removed from the sled to allow direct connection of a U.2 drive
image
image

LMK if you want to see anything else specific!!
I will be doing a complete teardown video soon and will link that here…

Speed / performance testing, as promised.
Obviously there are other situations, but I tried to cover the main ones (a sketch of one way to run similar benchmarks is just below the drive list):

***** I have identified some issues with the EDP connection leading to slow disk speeds; I will update the results as soon as this issue is resolved *****

The following drives have been used for the disk testing:
4x 1TB WD Black SN770 M.2 NVMe
6x 8TB HGST UltraStar He SATA
1x 3.8TB HGST SN200 U.2 NVMe
Onboard NVMe supplied by IceWhale.
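For anyone who wants to run similar numbers on their own unit, here is a minimal sketch using fio via a small Python wrapper (the device path, block sizes, and queue depths are illustrative assumptions, not necessarily the exact tool or job settings behind the screenshots below):

```python
#!/usr/bin/env python3
"""Minimal sketch: 4k random-read IOPS and 1M sequential-read throughput with fio."""
import subprocess

def run_fio(target: str, rw: str, bs: str, iodepth: int) -> None:
    # --direct=1 bypasses the page cache so results reflect the drive, not RAM.
    # Read-only jobs are safe on a raw device; write jobs would destroy its contents.
    subprocess.run([
        "fio", "--name=bench", f"--filename={target}",
        f"--rw={rw}", f"--bs={bs}", f"--iodepth={iodepth}",
        "--ioengine=libaio", "--direct=1",
        "--runtime=60", "--time_based", "--group_reporting",
    ], check=True)

run_fio("/dev/nvme0n1", "randread", "4k", 32)  # IOPS-style test (assumed device path)
run_fio("/dev/nvme0n1", "read", "1M", 8)       # sequential throughput test
```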

Single Drive Tests:

Mobo NVMe:
ZimaCubePro_v2_stock_NVME_IOPS
ZimaCubePro_v2_stock_NVME_speed

Quad NVMe Sled with one disk:
ZimaCubePro_v2_single_NVMEQuadSled_IOPS
ZimaCubePro_v2_single_NVMEQuadSled_speed

Single HDD:
ZimaCubePro_v2_HDD_IOPS
ZimaCubePro_v2_HDD_speed

Single U.2:
ZimaCubePro_v2_U2_IOPS

ZimaCubePro_v2_U2_speed

Multi Disk Tests:

Quad NVMe Sled RAID 0:
ZimaCubePro_v2_QuadSled_RAID0_speed
ZimaCubePro_v2_QuadSled_RAID0_IOPS

Quad NVMe Sled RAID 5:
ZimaCubePro_v2_QuadSled_RAID5_IOPS
ZimaCubePro_v2_QuadSled_RAID5_speed

HDD RAID 0:
ZimaCubePro_v2_HDD_RAID0_IOPS
ZimaCubePro_v2_HDD_RAID0_speed

HDD RAID 5:
ZimaCubePro_v2_HDD_RAID5_IOPS
ZimaCubePro_v2_HDD_RAID5_speed
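For reference, arrays like the ones above could be assembled with mdadm before benchmarking; here is a minimal sketch with assumed device names (not necessarily the exact commands or layout used for these results):

```python
#!/usr/bin/env python3
"""Minimal sketch: a RAID 0 NVMe array and a RAID 5 HDD array built with mdadm."""
import subprocess

def make_array(md_dev: str, level: int, members: list[str]) -> None:
    # mdadm --create wipes any existing data on the member devices.
    subprocess.run(
        ["mdadm", "--create", md_dev, f"--level={level}",
         f"--raid-devices={len(members)}", *members],
        check=True,
    )

# RAID 0 across the four sled NVMe drives (assumed device names)
make_array("/dev/md0", 0, [f"/dev/nvme{i}n1" for i in range(1, 5)])
# RAID 5 across the six SATA HDDs; let the initial sync finish
# (watch /proc/mdstat) before trusting any benchmark numbers.
make_array("/dev/md1", 5, [f"/dev/sd{c}" for c in "abcdef"])
```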

i5 Core Allocation
Screenshot from 2024-06-07 15-55-01

10 min full saturation stress on all cores / threads Stock Cooler

10 min full saturation stress on all cores / threads Silverstone Cooler

10 min full saturation stress on all cores / threads Silverstone Cooler + Noctua Fan
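For anyone wanting to repeat the saturation run, here is a minimal sketch assuming stress-ng is installed (this may not be the exact tool or settings used for the clips above):

```python
#!/usr/bin/env python3
"""Minimal sketch: a 10-minute all-core / all-thread saturation run with stress-ng."""
import subprocess

# --cpu 0 spawns one worker per online CPU thread; --metrics-brief prints a summary.
subprocess.run(
    ["stress-ng", "--cpu", "0", "--timeout", "10m", "--metrics-brief"],
    check=True,
)
# Watch temperatures and clocks in a second terminal, e.g. with `watch sensors`.
```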

Geekbench 6 results:
Geekbench1
Geekbench2
Geekbench3

Geekbench 6 ML results:
Geekbench6ML1


Geekbench6ML3

PassMark PerformanceTest Linux (11.0.1002)
PassMark PT Linux

After feedback from the community, I have added some audio checks using a dB meter app on my phone. This is not super scientific; it was a free app on my phone. Audio was taken approx. 1 ft away from the ZimaCube Pro on my desk.

I used the stock cooler that came with my Prototype ZimaCube (which is the same blower style fitted on the release N100)

I then swapped out to a SilverStone SST-AR09-115XS.
I also purchased a 12V Noctua NF-A6x25.

Ambient volume



Stock cooler idle


Stock cooler max

Silverstone stock idle

Silverstone stock max

Silverstone Noctua idle

Silverstone Noctua max

How about an lspci?

Does the sled share PCIe lanes with the bays through a PCIe switch? The benchmark kind of looks like the sled is getting PCIe 3.0 x2 or 4.0 x1, so what happens if you test a spinner RAID in the SATA bays and an NVMe RAID in the 7th bay simultaneously?

Thanks, I will get onto that tomorrow and see what I get out :slight_smile: The SATA drives and the 7th bay all connect through the same cable, but I will test it.
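In the meantime, for anyone who wants to check the negotiated PCIe link on their own drives, here is a quick sketch reading the standard Linux sysfs attributes (`lspci -vv` shows the same information on the LnkSta line):

```python
#!/usr/bin/env python3
"""Quick sketch: print the negotiated PCIe width and speed for each NVMe controller."""
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    pci_dev = os.path.realpath(os.path.join(ctrl, "device"))  # backing PCI device dir
    with open(os.path.join(pci_dev, "current_link_width")) as f:
        width = f.read().strip()
    with open(os.path.join(pci_dev, "current_link_speed")) as f:
        speed = f.read().strip()
    print(f"{os.path.basename(ctrl)}: x{width} @ {speed} ({os.path.basename(pci_dev)})")
```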

Thanks. It seems very clever, and “clever” almost always comes with an unexpected cost.

I’m regularly getting 2 GB/s from a single-NVMe (Gen 4.0 x4) ZFS pool over 2x10GbE SMB Multichannel doing large file copies using the Finder to and from my Mac Studio, and preliminary testing shows about 20% higher using Thunderbolt networking, so that’s my current performance benchmark. The NAS is a Ugreen NASync 6800 running the current release of TrueNAS SCALE (Dragonfish).

The numbers I get using the disk benchmarking programs I’ve tried are problematic, I think, because of TrueNAS caching and compression, so I’m reduced to real-world transfer speeds (which have their own set of problems).
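For what it’s worth, one way to take ARC caching and compression mostly out of the picture for a synthetic test is to relax those properties on a scratch dataset and point the benchmark at a file inside it; here is a minimal sketch with a made-up dataset name:

```python
#!/usr/bin/env python3
"""Minimal sketch: reduce caching/compression effects on a ZFS scratch dataset."""
import subprocess

dataset = "tank/benchmark"  # hypothetical scratch dataset, not one holding real data

# Keep only metadata in the ARC and disable compression for this dataset only,
# then run the benchmark against a file inside it.
subprocess.run(["zfs", "set", "primarycache=metadata", dataset], check=True)
subprocess.run(["zfs", "set", "compression=off", dataset], check=True)
```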