I have about 8 TB of storage that is currently only replicated through a RAID array. I occasionally sync that to an external USB drive and leave it in a fireproof safe (same location).

I’d really like to do an offsite backup, but I only have 10 Mbps upload. 8 TB at 10 Mbps works out to roughly 75 days of continuous, saturated upload, so we are literally talking months to do a full backup.

How do others handle situations like this?

  • SheeEttin@lemmy.world · 1 year ago

    Reconsider how much of that 8 TB really needs to be backed up. Thousands of pictures of your cat aren’t really going to be missed, and your Linux ISOs can be redownloaded.

    • nix98@lemmy.world (OP) · 1 year ago

      They are pictures of my dog and YES THEY DO! :) I mean, it is 25 years of my computing history there…

  • nomecks@lemmy.world · 1 year ago

    Pre-seed your backup location and then hope that your change rate is small enough to fit within 10 Mbps. For example, if you’re using AWS you can order a Snowball to load the data into S3.
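
    For the ongoing changes after the seed, a minimal sketch using the AWS CLI (the bucket name is a placeholder):

        # After the Snowball import lands in S3, push only what changed.
        # "aws s3 sync" skips files whose size and timestamp already match,
        # so routine runs stay within a 10 Mbps upload budget.
        aws s3 sync /mnt/storage s3://my-offsite-backup/storage \
            --storage-class DEEP_ARCHIVE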

  • tal@lemmy.today · 1 year ago

    I don’t do offsite backup, but if your backup system supports it, you could physically take your backup drive to some location with a lot of bandwidth and push the initial full backup from there.

    • nix98@lemmy.world (OP) · 1 year ago

      Yeah, that is what I am thinking. I am using duplicity for backups, so I can probably back up to a hard drive, take that to work, sync it to my backup provider, and then just do incremental backups from then on.

      However, I think duplicity really wants to do a full backup every X months, so I’m not sure of the best way to handle that.
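
      In duplicity terms, that seed-then-increment flow might look like this (a sketch; paths and the provider URL are placeholders):

          # 1. One-time full backup onto a local USB drive: fast, no network.
          duplicity full /mnt/storage file:///mnt/usb/seed

          # 2. At work, copy the archive volumes to the provider
          #    (plain rsync works; duplicity volumes are ordinary files).
          rsync -av /mnt/usb/seed/ backup@provider.example:backups/storage/

          # 3. From home, point duplicity at the remote copy for incrementals;
          #    it only needs the small signature/manifest files, not the data.
          duplicity incremental /mnt/storage sftp://backup@provider.example/backups/storage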

      • tal@lemmy.today · 1 year ago

        I don’t know whether periodic full backups matter for performance, but you’d need one if you wanted to remove old backups. It looks like the term for a full backup that reuses already-pushed data is a “synthetic full” backup, and duplicity can’t do those: when it does a full, it pushes all the data over again.

        I have never used it, but Borg Backup does appear to support this, if you want an alternative that can.

        EDIT: However, Borg requires code running on the remote end, which may not be acceptable for some; you can’t just aim it at a plain fileserver the way you can with duplicity.

        I have also never used it, but at a quick glance duplicati looks like it can work without code running on the remote end and can also do synthetic full backups.
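
        To illustrate the Borg model (a sketch; the repo URL is a placeholder): every archive is effectively a full restore point, but only new deduplicated chunks cross the wire, and pruning old archives never forces a fresh full:

            # One-time: create an encrypted repo on the remote host
            # (Borg must be installed there, per the caveat above).
            borg init --encryption=repokey ssh://backup@remote.example/./repo

            # Each run is a complete restore point; only changed chunks upload.
            borg create --stats ssh://backup@remote.example/./repo::'{hostname}-{now}' /mnt/storage

            # Expire old archives without ever re-uploading a full.
            borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=12 \
                ssh://backup@remote.example/./repo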

      • CmdrShepard@lemmy.one · 1 year ago

        Another alternative is to set up a backup server at a willing friend or family member’s house, so you can physically deliver the drive for the initial copy and just upload any new changes later.
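
        As a minimal sketch, assuming their box is reachable over SSH with key-based auth (hostname and paths are placeholders), the later changes can be a nightly cron job:

            # /etc/cron.d/offsite -- push only the deltas, nightly at 02:30
            30 2 * * * backup rsync -a /mnt/storage/ friend-server:/srv/backup/storage/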

        • RvTV95XBeo@sh.itjust.works · 1 year ago

          Or, pick up two backup drives, keep one at a friend’s or relative’s house, and just swap them every time you visit.

          I keep a drive at my parents’ house in case of emergencies. Backup frequency is essentially every few months, but I also have a local portable drive with real-time sync that I can snag on my way out.

  • TechNerdWizard42@lemmy.world · 1 year ago

    Just start the backup and wait 3 months. That’s not that bad. Are you losing access to the data soon? If not, just let your auto-sync take care of it piece by piece.

    I had an emergency situation where I needed to move the data, and the upload was 25 Mbps. Stupid cable companies, not understanding why people need symmetry. It would have taken approximately a year of continuous upload, and I didn’t have that. So I used four Starlink connections, aggregated them into a cloud server with a VPN, and suddenly had 225-425 Mbps of upload. It took about 3 weeks, but all the data was moved.

    • shadowbert@kbin.social · 1 year ago

      I’ve never really understood why, seemingly universally, symmetric plans (or at least plans with non-anemic upload) are completely unaffordable compared to “normal” plans, assuming they’re available at all.

      It truly sucks for stuff like this.

      • myplacedk@lemmy.world · 1 year ago

        Seemingly. 🙂

        My ISP only offers symmetric plans. The cheapest one they advertise costs about 10 Big Macs per month.

        I can’t speed-test my connection, as my Wi-Fi is the bottleneck. But the way our law works, they can’t really lie about speed; the “up to” trick was banned a long time ago.

      • TechNerdWizard42@lemmy.world · 1 year ago

        Not just unaffordable; often simply not available.

        The house where I needed to transfer data was in a neighbourhood that only had 1960s copper phone lines for DSL, plus coax cable. The maximum possible was 400 Mbps down and 25 Mbps up. Over the years it was increased to 800 Mbps down, still 25 Mbps up. I paid over $1000 USD a month for that shit internet, because the only alternative was 4-8 Mbps upload.

        This is a major metro area, 700k people. Starlink was a game changer. Not symmetric, but waaaay better.

        There’s only so much bandwidth on the cable line, and they’ve spent ages marketing download speed as the measure. If they went from 400/25 to 400/50, 99.9999% of people wouldn’t understand and wouldn’t pay extra. But make it 425/25 and people will buy. Bigger number more better.

  • Eskuero@lemmy.fromshado.ws · 1 year ago

    Idk what offsite means to you, but if it’s somewhere you control (like a friend’s or family member’s place), you could simply bring your device over one day and do the first copy there.

    Otherwise, maybe rsync a folder at a time.
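
    As a sketch (the remote host and paths are placeholders): --partial lets a multi-day transfer resume after interruptions, and --bwlimit keeps the uplink usable for everything else:

        # One folder per run; resumable and bandwidth-capped.
        # 1000 KiB/s is roughly 8 Mbps, leaving headroom on a 10 Mbps link.
        rsync -a --partial --progress --bwlimit=1000 \
            /mnt/storage/photos/ backup@offsite.example:/srv/backup/photos/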

  • Solar Bear@slrpnk.net · 1 year ago

    I very recently started using borgbackup. I’m extremely impressed with how much it compresses the data before sending, and how well it detects changes and sends only the difference. I have not yet attempted a proper restore from backup, though.

    I have much less data to secure (~50 GB) and much more uplink bandwidth (~115 Mbps), so my situation isn’t nearly as dire. But it was able to compress that down to less than 25 GB before sending, and after the initial upload, the next week’s backup only required about 100 MB of transfer.
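
    For reference, a sketch of the knobs behind numbers like those (the repo path is a placeholder): compression is chosen per run, and --stats prints the original, compressed, and deduplicated sizes:

        # zstd level 6 is a reasonable speed/ratio middle ground;
        # --stats reports how much actually went over the wire.
        borg create --stats --compression zstd,6 \
            /mnt/backup/repo::'{now}' ~/documents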

    If you can find a way to seed your data from a faster location, reduce the amount you need to back up, and/or break it up into multiple smaller transfers, this might be an effective solution for you.

    BorgBase’s highest plan has an upper limit of 8 TB, which you would be brushing right up against, but Hetzner storage boxes go up to 20 TB and officially support Borg.

    Outside of that, if you don’t expect the data to change often, you might be looking for some sort of cheap S3 storage from AWS or another similarly large datacenter company. But you’ll still need to find a way to actually get them the data safely, and I’m not sure they support differential uploads the way Borg does.

  • 𝘋𝘪𝘳𝘬@lemmy.ml · 1 year ago

    “How do others handle situations like this?”

    A company I worked for kept an external storage drive of the needed capacity in a safe-deposit locker; every Friday someone drove to the bank, got the drive, drove back to the office, performed the backup, and brought the drive back to the bank.

    • Midnight Wolf@lemmy.world · 1 year ago

      I’m planning on doing this, but every 6 months, not weekly, lol. My upload speed is abysmal as well, so I only keep the absolutely critical things in offsite storage. I also have a full local backup that stays physically disconnected until the next run (to guard against ransomware); I’m about to get a 20 TB drive for this ‘catastrophic event’ backup plan, stored at the bank, just undecided on which drive manufacturer to go with.

      I figure if the house explodes or something, a maximum of six months of data loss is acceptable versus losing almost everything. And it avoids this bandwidth issue.

  • PracticalParrot@discuss.tchncs.de · 1 year ago

    I’d love to know how to deal with this. Currently sitting on 12 TB used. Decent upload, but cloud storage for that much is expensive. I have five 8 TB HDDs in a RAID 6 config, two of which provide the redundancy.

    One thought I had was to convince a friend to set up the same, with each of us dedicating half of our storage as redundancy for the other person.

    • Lobotomie@lemmy.world · 1 year ago

      Hosting providers usually offer some sort of dedicated storage for long-term/backup use, which is different from general cloud storage and also a lot cheaper. For example, Hetzner storage boxes.