How I do backups
Backups are important! I don't just mean for production environments either - you should back up your laptop regularly and in an automated fashion.
I started doing regular backups of my machines after suffering two consecutive drive failures on one of my laptops. After something like that (which, BTW, has never happened since XD) you realize that backups are not just a nice thing to have, but a hard requirement. In this blog post I'll outline how I set up my on-site backup solution.
When two of my hard drives failed I fortunately didn't lose a lot of data. I used to rsync some of my stuff between two machines (but not very often), my work stuff was pushed to remote git repos, I used various cloud storage services for some things, etc. Still, there were some things that I lost with the drives, and it was annoying and time consuming to get up and running again. This should not be the case.
After the second incident I decided I would do something about it and set up an on-site backup solution. Why on-site? Well, performance mostly, but I also didn't trust any of the cloud based solutions (I use tarsnap now but that's another blog post :)). So - I bought a NAS1 and went looking for a backup tool.
NOTE: I've been sitting on this blog post for 3 years. So bear in mind that some things might have changed since then which I didn't bother to get up to date on. However, I'm still happily using the method described in this post.
While searching the web for potential candidates I put together a couple of requirements:
- The tool needs to be OSS
- Backups need to be encrypted
- Backups need to be incremental
- Backups need to reside on a local NAS I have running in my homelab
- Backups need to happen regularly without requiring input from a human
NOTE: Since I only run Linux machines I didn't care if the tool was cross platform.
I quickly found that borg-backup would be a great fit for my use case. It's a very widely known (and used) tool that seemed like it checked all of the boxes above (except the last one...but we'll get to that).
Borg was fairly easy to set up and run. The documentation is excellent. I did fiddle a bit with the ignore list (so that it wouldn't back up useless files like caches etc.) but after that it basically just worked.
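For reference, here's a hedged sketch of what such an invocation can look like — the repo path, archive name, and exclude patterns are illustrative assumptions, not my actual config:

```shell
#!/bin/sh
# Illustrative borg invocation with a few excludes; REPO and the
# patterns below are assumptions, not my real setup.
REPO=/mnt/nas-backups/borg-repo

backup() {
    borg create --stats --compression lz4 \
        --exclude '*/.cache' \
        --exclude '*/node_modules' \
        --exclude '*.pyc' \
        "$REPO::{hostname}-$(date +%F)" \
        "$HOME"
}
```

Once the exclude list grows, an `--exclude-from` file keeps the command line sane.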
Borg is designed as a client/server system. This means that you have to install the server component on your storage device (where the backups are going) so that the client (ie. your laptop) can communicate with it. You can read more about it here. At the time my NAS didn't have an official/supported way of installing borg2 so I opted for a client-only approach where I just mount the backup folder via NFS and let the client do its thing. NFS is pretty battle tested and I didn't see this as a huge downside. Although performance does suffer a bit, it has generally worked well so far.
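For the curious, the NFS mount can be as simple as an /etc/fstab entry along these lines (the hostname and paths are made up for illustration):

```
# /etc/fstab — mount the NAS backup share over NFS (example paths)
nas.local:/volume1/backups  /mnt/nas-backups  nfs  rw,noauto,user  0  0
```

With `noauto,user` the share isn't mounted at boot; the backup script mounts it on demand.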
To run it automatically I wrapped the entire thing in a shell script and set it to run daily at a certain time - and the simplest possible way was to use cron to do that.
This was the lowest amount of effort needed to get an "out of sight, out of mind" backup system working without me having to babysit it.
Basically, borg checked all the boxes except the last one: how do I run backups automatically without needing my input or attention?
There were 3 main problems with the above approach:
- What if my laptop isn't turned on at the time of day when the backup is scheduled?
The simplest possible way to solve this is to re-run the backup script every so often, during the hours when the laptop is most likely to be running. It was important that the backups didn't happen in the middle of the day, because they can sometimes be a bit CPU intensive while I'm doing other things (encryption is client side).
I decided to just re-run the script every hour between 19:00 and 23:00. I figured this would give the backups enough chances to run (given my laptop usage patterns).
30 19-23 * * * /path/to/my_backups.sh
Side note: check out crontab.guru for a more detailed explanation of the cron syntax.
This solution required changing my shell script so that it first checks whether a backup for the day has already been done. If not, it runs the backup; otherwise it just exits. Doesn't seem like a big deal, right?
Well, even small changes in shell scripts can make them more complex and less maintainable.
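The check itself can stay small, though. Here's a minimal sketch of that guard — the stamp-file location and function names are my assumptions for illustration, not the actual script:

```shell
#!/bin/sh
# Guard: only run the backup once per calendar day, tracked via a stamp file.
# STAMP_DIR is an illustrative location, not from the original script.
STAMP_DIR="${TMPDIR:-/tmp}/my_backups_demo"
STAMP="$STAMP_DIR/last-run"

ran_today() {
    # the stamp file holds the date (YYYY-MM-DD) of the last successful run
    [ -f "$STAMP" ] && [ "$(cat "$STAMP")" = "$(date +%F)" ]
}

mark_done() {
    mkdir -p "$STAMP_DIR"
    date +%F > "$STAMP"
}

if ran_today; then
    echo "backup already done today, nothing to do"
else
    echo "running backup..."
    # ... borg create would go here ...
    mark_done
fi
```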
In retrospect a better solution would most likely be if I had just used anacron which is more suited for systems that don't run 24/7.
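With anacron the whole thing would shrink to roughly a single entry (sketch; the path is the same placeholder as in the cron line above):

```
# /etc/anacrontab sketch — fields: period(days) delay(min) job-id command
1  10  my_backups  /path/to/my_backups.sh
```

anacron tracks the last run itself, so the "did it already run today?" bookkeeping disappears.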
- What if I turn off my laptop in the middle of a backup?
I never really investigated what would happen here. I assume that on a clean shutdown borg would just abort cleanly and do a backup in the next backup window. Unclean shutdowns are another matter: usually you end up with a stale lock file, and no future backups can run until you manually sort it out.
I chose to ignore this and just make sure that the human (me) is always notified when a backup starts and when it finishes. On Linux this is pretty easy with libnotify. I use notify-send to send the start and finish notifications:
notify-send -u normal "backup started..."
notify-send -u normal "backup finished!"
This would catch my attention and I would know not to power off the machine if the backup hadn't finished yet. And if I dismissed the notification and forgot about it, my window manager makes it really easy to cycle through old notifications and verify whether the backup was done. It's not perfect, but it works for me.
- What if I'm not connected to my LAN and therefore cannot mount my NFS drive?
This was the biggest issue with the above approach. I could just let the cronjob fail as it would be picked up later anyway - but I didn't like that.
I briefly thought about extending my shell script wrapper with a check for whether I'm connected to my home WiFi, and only then starting the backup. But that would have made the shell script even more complex and harder to maintain. I don't know about you, but when I have a lot of conditional logic in shell scripts, things start to fall apart really quickly.
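For the record, the check itself isn't much code. Something like this sketch would do — the SSID and the use of nmcli are assumptions (iwgetid -r would work just as well):

```shell
#!/bin/sh
# Sketch: only back up when connected to the home WiFi.
# HOME_SSID and the nmcli query are illustrative assumptions.
HOME_SSID="my-home-wifi"

current_ssid() {
    # nmcli prints "yes:<ssid>" for the active wifi connection
    nmcli -t -f active,ssid dev wifi 2>/dev/null | awk -F: '$1 == "yes" {print $2}'
}

on_home_network() {
    [ "$(current_ssid)" = "$HOME_SSID" ]
}

if on_home_network; then
    echo "on home LAN, safe to mount the NAS"
else
    echo "not at home, skipping backup"
fi
```

The check is simple enough on its own; it's weaving yet another branch into an already-growing script that gets messy.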
I briefly considered rewriting it in Python - and that would have been fine - but I was writing most of my tooling in Haskell at the time, so that's what I rewrote it in.
Now, instead of a brittle shell script, I had a proper binary I can configure with a config file (yes it's YAML). I still use cron to run it though. I was thinking of making a proper daemon out of it but decided against it. It was too complicated for not much gain.
And there you have it. Backups for folks like me who don't yet use ZFS! :D I jest, but I'll likely be migrating my machines to ZFS starting next year, so I'm not sure how much longer I'll need these tools. We'll see.
I mentioned at the beginning that I also use Tarsnap for off-site backups (as well as for my servers). Why have both? Redundancy is always a good thing - if my local backups get corrupted I can always turn to my cloud backups or vice versa. Also, you never really know if your backups work if you don't test restoring from a backup often. I still don't do that part very often so I figured it's best to have two systems just in case.
It's just a 2 bay Synology DS215+ device. ↩
Synology runs its own customized Linux-based OS. It's mostly great, but it hides Linux away from you. As a consequence you can't just apt-get install whatever you want; you need to go through their own "application store". A friend recently mentioned that borg is available under third party packages. That's still not officially supported and I didn't want to break anything on my NAS. It's an appliance, not a general purpose Linux machine, so I treat it as such. ↩
I had just watched the new Blade Runner remake so that's where the pun comes from. :D ↩