Basically title. I’m in the process of setting up a proper backup for my configured containers on Unraid, and I’m wondering how often I should run my backup script. Right now I have a cron job set to run on Monday and Friday nights; is this too frequent? What’s your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?
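For context, the entry in question looks something like this (script name and path are just placeholders, not my exact setup):

```
# 01:30 on Monday and Friday nights
30 1 * * 1,5 /boot/scripts/backup_appdata.sh
```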
Backups???
RAID is a backup.
That is what the B in RAID stands for.
Just like the “s” in IoT stands for “security”
🤣
What’s the second B stand for?
Beets.
Or bears.
Or buttsex.
It’s context-dependent, like “cool”.
cool
If RAID is a backup, then what’s Unraid?
I do not, as I cannot afford the extra storage required to do so.
Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies and infinite annuals.
Periodic test restores of all backups at various granularities at least monthly or whenever I’m bored or fuck something up.
Yes, former sysadmin.
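For anyone curious, that retention maps onto PBS prune options roughly like this (repository, datastore, and guest group are made up, and since there’s no literal “infinite” keep option, yearly is just set absurdly high):

```
proxmox-backup-client prune vm/100 \
  --repository backup@pbs@pbs.example.lan:store1 \
  --keep-daily 30 --keep-weekly 12 --keep-monthly 24 --keep-yearly 999 \
  --dry-run   # preview what would be pruned before doing it for real
```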
This is very similar to how I run mine, except that I use Ceph instead of ZFS. Nightly backups of the CephFS data with Duplicati, followed by staggered nightly backups of all VMs and containers to a PBS VM on the NAS. File backups from Unraid get sent up to CrashPlan.
Slightly fewer retention points to cut down on overall storage, and a similar test pattern.
Yes, current sysadmin.
I would like to play with Ceph, but I don’t have a lot of spare equipment anymore, and I understand ZFS pretty well and trust it. Maybe on the next cluster upgrade, if I ever do another one.
And I have an almost unhealthy paranoia after seeing so many shitshows in my career, so having a pile of copies just helps me sleep at night. The day I have to delve into the last layer is the day I build another layer, but that hasn’t happened recently. PBS dedup is pretty damn good, so it’s not much extra to keep a lot of copies.
I’m always backing up with Syncthing in real time, but every week I do an off-site tarball backup that isn’t part of the Syncthing setup.
I use Duplicati for my backups, and have backup retention set up like this:
Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.
That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
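For anyone who wants to copy this, it’s expressed with Duplicati’s custom retention policy option; the value is a comma-separated list of timeframe:interval pairs (target URL is a placeholder):

```
# last week: one per day; last month: one per week; last year: one per month
duplicati-cli backup <target-url> /source/data --retention-policy="1W:1D,1M:1W,1Y:1M"
```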
Every hour, automatically
Never on my Laptop, because I’m too lazy to create a mechanism that detects when it’s possible.
I just tell it to back up my laptops every hour anyway. If it’s not on, it just doesn’t happen, but it’s generally on enough to capture what I need.
> Right now, I have a cron job set to run on Monday and Friday nights, is this too frequent?
Only you can answer this. How many days of data are you prepared to lose? What are the downsides of running your backup scripts more frequently?
rsync from ZFS to an off-site Unraid every 24 hours, 5 times a week. On the sixth day it does a checksum-based rsync, which obviously means more stress, so I only do that once a week. The seventh day is reserved for ZFS scrubbing, every two weeks.
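In crontab terms it’s something like this (paths, host, and exact flags are placeholders):

```
# Mon-Fri 03:00: regular rsync, compares size/mtime only
0 3 * * 1-5 rsync -a --delete /tank/data/ offsite:/mnt/user/backup/
# Saturday 03:00: checksum pass, reads every byte on both ends, hence weekly
0 3 * * 6 rsync -ac --delete /tank/data/ offsite:/mnt/user/backup/
```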
I back up all of my Proxmox LXCs/VMs to a Proxmox Backup Server every night, and sync those backups to another PBS in another town. A second Proxmox backup runs every noon to my NAS. (I know, the 3-2-1 rule is not reached…)
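The nightly job is just a scheduled Proxmox backup job; run from root’s crontab it would look roughly like this (storage name is hypothetical):

```
# 02:30 nightly: snapshot-mode backup of all guests to the PBS storage
30 2 * * * vzdump --all --mode snapshot --storage pbs-local
```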
I have
- Unraid backing up its USB drive
- Unraid appdata gets backed up weekly by a Community Applications plugin (CA Appdata Backup), and I use rclone to push that to an old Box account (100GB for life…). I did have it encrypted, but it seems I need to fix that…
- Parity drive on my Unraid (8TB)
- I am trying to understand how to use rclone to back up my photos to Proton Drive, so that’s next (rough sketch below)
Music and media are not too important yet, but I would love some insight.
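On the Proton Drive question: rclone has a protondrive backend (still marked beta last I checked), so the rough shape would be something like this, with remote name and paths as placeholders:

```
rclone config                        # create a remote of type "protondrive", e.g. named "proton"
rclone sync /mnt/user/photos proton:photos --progress --dry-run   # preview before the real run
```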
If you haven’t tested your backups, you ain’t got a backup.
Local ZFS snap every 5 mins.
Borg backs up everything hourly to 3 different locations.
I’ve blown away docker folders of config files a few times by accident. So far I’ve only had to dip into the zfs snaps to bring them back.
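The hourly Borg run is nothing fancy; per location it’s roughly this (repo URL and paths are placeholders):

```
# one archive per run, named by host and timestamp; Borg dedups across archives
borg create --stats ssh://borg@backup1.example/srv/borg/repo::'{hostname}-{now}' /data /etc
```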
Try ZFS send if you have ZFS on the other side. It’s insane. No file I/O, just a snap and the time for the network transfer of the delta.
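A minimal incremental send looks like this (pool/dataset names and snapshot labels invented):

```
zfs snapshot tank/appdata@2024-06-02
# ship only the delta between yesterday's and today's snapshots
zfs send -i tank/appdata@2024-06-01 tank/appdata@2024-06-02 | \
  ssh backuphost zfs receive backup/appdata
```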
I would, but the other side isn’t ZFS, so I went with Borg instead.
I classify the data according to its importance (gold, silver, bronze, ephemeral). The regularity of the ZFS snapshots (15 minutes to several hours) and their retention time (days to years) on the server depend on this. I then send the more important data that I cannot restore, or can only restore with great effort (gold and silver), to another server once a day. For bronze, the ZFS snapshots and a few days of storage time on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.
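As a sketch of the snapshot side (dataset names and intervals invented; pruning and the daily send are handled by separate scripts):

```
# gold: every 15 minutes (note: % must be escaped as \% inside crontab)
*/15 * * * * zfs snapshot tank/gold@auto-$(date +\%Y\%m\%d-\%H\%M)
# bronze: every 4 hours is plenty
0 */4 * * * zfs snapshot tank/bronze@auto-$(date +\%Y\%m\%d-\%H\%M)
```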
No backup for my media. Only redundancy.
For my Nextcloud data, anytime I make major changes.
Assuming it is on: Daily