Hey fellow Selfhosters! I need some help, I think, and searching isn’t yielding what I’m hoping for.

I recently built a new NAS for my network with 4x 18TB drives in a ZFS raidz1 pool. I had previously been using a 12TB external USB hard drive attached to a different machine.

I’ve been attempting to use rsync to copy the 12TB drive over to the new pool, and things go great for the first 30-45 minutes. At that point, the copy speed drops off and the 4 files currently in progress sit at 100% without ever completing. Eventually I’ve had to reboot the machine, because the zpool no longer appears accessible. After the reboot, the pool looks fine with no faults, and I can resume rsync for a while.

EDIT: Of note, the rsync process seems to stall and won’t respond to SIGINT or Ctrl+C. I can SSH in separately, but running `zpool status` hangs with no output.
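
A process ignoring SIGINT like that is usually stuck in uninterruptible sleep (D state) waiting on the kernel, which would point at ZFS or the disk layer rather than rsync itself. In case it helps anyone diagnose, I think something like this would confirm it next time it hangs (my best guess at the commands, happy to be corrected):

```bash
# List processes stuck in uninterruptible sleep (state D),
# plus the kernel function they're blocked in
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /^D/'

# The kernel also logs tasks that have been blocked for ~120s
sudo dmesg | grep -iE 'hung task|blocked for more than'
```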

While the workaround is partially successful, the point of using rsync is to make this fairly hands-free, and it’s been a week-long process to copy the 3TB I have so far. I don’t think my zpool should be disappearing like that! It makes me nervous about the long-term viability. I don’t think I’m ready to drop down to Unraid.

rsync is being initiated from the NAS to pull from the old server; am I better off “pushing” from the old machine instead? I can’t imagine it’d make much difference.
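
To make the question concrete, here’s roughly the shape of both directions, with made-up hostnames and paths (mine differ):

```bash
# What I'm doing now: run on the NAS, pulling from the old server
rsync -aHAX --partial --info=progress2 olduser@oldserver:/mnt/usb12tb/ /tank/backup/

# The alternative: run on the old server, pushing to the NAS
rsync -aHAX --partial --info=progress2 /mnt/usb12tb/ nasuser@nas:/tank/backup/
```

Either way, `--partial` keeps partially transferred files around, so resuming after a reboot doesn’t redo them from scratch.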

Could my drives be bad, and how would I tell? They’re attached to a 10-port SATA card; could that be defective, and how would I check?
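
For the drives, I’m assuming the answer involves smartmontools; something like this is my guess (device names are examples, one per drive):

```bash
# SMART health summary and the counters that usually matter
sudo smartctl -a /dev/sda | grep -iE 'overall-health|reallocated|pending|uncorrect'

# Start an extended self-test (runs in the background; takes hours on an 18TB drive)
sudo smartctl -t long /dev/sda
# ...and read the result once it finishes
sudo smartctl -l selftest /dev/sda
```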

Thanks for any help! I’ve dabbled in Linux for a long time, but I’m far from proficient, so I don’t really know the intricacies of dmesg et al.
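
If it helps anyone point me in the right direction, my best guess at what to look for in dmesg after a hang is something like this (the grep pattern is a guess):

```bash
# ATA/SATA link errors, resets, and I/O failures around the time the
# pool locks up would implicate the drives or the controller
sudo dmesg -T | grep -iE 'ata[0-9]|i/o error|reset|zfs'
```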

  • isles@lemmy.world (OP) · 1 year ago

    Thank you! I ended up connecting the drives directly to the motherboard and had the same result with rsync: eventually the zpool becomes inaccessible until reboot (though there may be other ways to recover it without rebooting).