ZimaCube Pro - Problem: 7th-bay M.2 crashes after minutes

I received my new ZimaCube Pro with 64 GB RAM and am getting started with it.

To start, I installed and tested different M.2 disks individually in the 7th bay:

  1. Samsung 980 Pro - 1 TB
  2. Samsung 970 EVO Plus - 1 TB
  3. KIOXIA (Toshiba) - 1 TB
  4. KIOXIA (Toshiba) - 512 GB

Initially, the M.2 disks are displayed correctly in the BIOS and ZimaOS and are usable.

After a few minutes of operation, however, the M.2 disks are no longer accessible, and it seems that the board/controller has crashed completely. The HDD (a Seagate IronWolf 8 TB for testing) installed in another bay is then also no longer accessible. Without an M.2 disk in the 7th bay, the HDD runs for several hours without any problems.

Interestingly, when the system crashes, the lighting in the bays goes out, and the power button LED also turns off.

After a crash, a normal restart/power-off is not enough: the M.2 disks are no longer displayed in the BIOS, and only the drive with ZimaOS (Kingston) is visible. Only after the ZimaCube is completely disconnected from the power supply do the M.2 disks come back and the lighting work again!
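
For anyone who wants to try reviving the controller without pulling the plug: a PCIe remove/rescan via sysfs might be worth a shot before a full power disconnect. A minimal sketch (needs root; the PCI address 0000:01:00.0 is only a placeholder, look up the real one with lspci):

```python
import pathlib

# Placeholder PCI address of the 7th-bay NVMe controller; find the real one with lspci.
DEV = "0000:01:00.0"

# Detach the (possibly hung) device from the PCI bus...
pathlib.Path(f"/sys/bus/pci/devices/{DEV}/remove").write_text("1")

# ...then ask the kernel to re-enumerate the bus. If the controller has truly
# lost its power/firmware state, this will not bring it back; only a full
# power disconnect will.
pathlib.Path("/sys/bus/pci/rescan").write_text("1")
```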

This problem can be reproduced whenever I try to install and boot a different OS on the M.2 disk. With TrueNAS Scale/Core, the drive disappears during installation and an “I/O Device Error” from the controller (pcieport error) is displayed. The same thing happens during the Proxmox installation.
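
For reference, the exact controller messages should be visible in the kernel log right before the drive drops off. A minimal sketch to filter them out of dmesg (assuming a Linux host where dmesg is readable; the patterns are just the subsystems I would expect for this kind of failure):

```python
import re
import subprocess

# Kernel-log patterns that typically accompany a dropped NVMe/PCIe device.
PATTERNS = re.compile(r"pcieport|AER|nvme|I/O error", re.IGNORECASE)

# Read the kernel ring buffer (needs root or adequate permissions).
log = subprocess.run(["dmesg", "--ctime"], capture_output=True, text=True, check=True)

for line in log.stdout.splitlines():
    if PATTERNS.search(line):
        print(line)
```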

Does anyone have similar problems? Can I solve it myself, or is it more likely a hardware defect?

Thanks
Stephan

I encountered a similar problem when attempting to create a RAID 1 configuration with my two M.2 SSDs in bay 7: the process failed with “Fail - Exit Status 1,” and since then the SSDs are no longer visible.

P.S. The M.2 SSDs are Crucial P3 Plus 4 TB.

Did you install the TrueNAS system on the host’s SSD or the 7th bay’s SSD?
At present, I think the problem may be caused by a failure of the 7th bay. I suggest you contact support@icewhale.org for further help.

I installed TrueNAS and Proxmox on the NVMe disk in the 7th bay. I left the host SSD (Kingston) untouched as a backup.

Tomorrow I’ll install a PCIe NVMe adapter and see how stable it is.

I do not recommend installing the system on the seventh bay. First, the seventh bay is on a different channel than the SSD slot on the motherboard and goes through two additional signal-conversion chips on its way to the CPU. Second, the seventh bay overheats more easily than the motherboard slot; the heat comes from the SSD itself and from the conversion chip behind the seventh bay. An overheated SSD loses speed and may even stop working. BTW, the seventh bay does not support hot swapping; I guess that’s why you need to power-cycle the device for it to recognize the SSD.
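
If you want to check the overheating theory, you can watch the drive’s temperature while it is under load. A rough sketch that polls the NVMe hwmon sensors the Linux kernel exposes in sysfs (the exact paths vary by kernel and drive, so treat them as assumptions):

```python
import glob
import time

def nvme_temps():
    """Read NVMe temperatures (degrees C) exposed via the kernel's hwmon."""
    readings = {}
    # The path layout is kernel-dependent; adjust if your sysfs differs.
    for path in glob.glob("/sys/class/nvme/nvme*/hwmon*/temp*_input"):
        with open(path) as f:
            readings[path] = int(f.read().strip()) / 1000.0  # millidegrees -> degrees
    return readings

# Poll every 5 seconds; a steady climb toward the drive's throttle point
# (often around 80 degrees C) just before the drop-off would support the
# overheating theory. Stop with Ctrl-C.
while True:
    for path, celsius in sorted(nvme_temps().items()):
        print(f"{path}: {celsius:.1f} C")
    time.sleep(5)
```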

Small update on my case - after a fresh installation of ZimaOS 1.2.3 I still get the same error when creating the RAID; afterwards the SSDs no longer show up in the ZimaOS Storage Manager.

OK, I have installed Proxmox on the host SSD. It works fine with the Kingston SSD.

But the additional SSD in the 7th bay (in this test case the Samsung 970 EVO Plus) still crashes.

A few seconds/minutes after boot, the drive disappears and only the host SSD is visible and accessible. This can be accelerated by starting any activity on the SSD in the 7th bay (e.g. wiping the current filesystem/partitions).
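
Based on that, the crash can presumably be forced on demand with sustained writes to the bay-7 drive. A rough sketch (the device path /dev/nvme1n1 is only an example, check lsblk first; raw writes destroy all data on the disk):

```python
import os
import time

# Example device node for the 7th-bay SSD; verify with lsblk before running.
# WARNING: raw writes destroy any data on the disk.
DEV = "/dev/nvme1n1"
CHUNK = b"\0" * (4 * 1024 * 1024)  # write 4 MiB of zeros per iteration

start = time.time()
with open(DEV, "wb", buffering=0) as disk:
    while True:
        if not os.path.exists(DEV):
            print(f"device node vanished after {time.time() - start:.0f} s")
            break
        try:
            disk.write(CHUNK)
        except OSError as err:
            # An I/O error here matches the pcieport error seen during the
            # TrueNAS/Proxmox installs (or end-of-device, if the drive survives).
            print(f"I/O error after {time.time() - start:.0f} s: {err}")
            break
```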

It’s the same behavior as on ZimaOS.

I don’t hot-swap the SSD. A normal shutdown or reboot does not help once the system/7th bay has crashed. Only disconnecting the ZimaCube Pro from the power supply after shutdown brings the SSD in the 7th bay back.

I have exactly the same problem with my ZimaCube Pro. As soon as I start configuring the SSDs in TrueNAS Scale, the OS crashes and reboots. Removing the bay afterwards is the only solution.

I have installed Proxmox on the host SSD and configured ZFS RAIDZ for the 7th-bay SSDs. So far so good.