Building Your Own Time Machine for Linux

After heavy use of Time Machine on my Macs, and a couple of situations where it really came in handy (there were cases where I had to reinstall the system from backups, and cases where I had to roll the system back after problems), the question arose: why doesn't Linux have such a convenient system? After researching the question and quizzing Linux-using friends, it turned out that:
1. such a system can be put together in a couple of minutes, on your knees;
2. strangely, hardly anyone seems to know that it can be set up this quickly;
3. our Time Machine for Linux will come with mahjong and geishas.

I happened to have a spare first-generation Apple TV, and I decided to turn it into a small server on hardened Linux for collecting logs and other auxiliary purposes. After installing hardened Gentoo, the question arose of how to back it up so that the entire disk is captured at once: in case of complete failure it should be possible to roll the backup onto a new disk or onto a completely new Apple TV (hmm, if I ever find another one, of course), or to selectively restore deleted or lost files simply by referring to a certain date.

How does Time Machine work? Quite simply: the first snapshot of the system is just copied as files, with all their attributes, into Backups.backupdb/hostname/YYYY-MM-dd-hhmmss, and a Latest symlink is created pointing at the most recent snapshot. On the second snapshot the following happens: file modification dates are compared, and if a file has not changed, then instead of a new copy a hardlink is made to the file from the previous snapshot. So if no files have changed at all, the whole new snapshot consists entirely of references to the previous one; snapshots differ only in new/modified/deleted files. As a result, you can simply delete any snapshot (a directory like YYYY-MM-dd-hhmmss) and nothing breaks. The closest analogy is a smart pointer in C++: when the last link to a file (across all snapshots) disappears, the file is deleted from disk.
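The hardlink mechanics above can be seen with plain ln and stat; this is just an illustrative sketch on throwaway directories, not the real backup layout:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)

# "First snapshot": an ordinary file.
mkdir -p "$tmp/2024-01-01-000000"
echo "hello" > "$tmp/2024-01-01-000000/file.txt"

# "Second snapshot": the unchanged file becomes a hardlink, not a copy.
mkdir -p "$tmp/2024-01-02-000000"
ln "$tmp/2024-01-01-000000/file.txt" "$tmp/2024-01-02-000000/file.txt"

# Both names point at the same inode, so the link count is 2.
stat -c %h "$tmp/2024-01-02-000000/file.txt"    # 2

# Deleting the older snapshot does not touch the data: the file lives
# as long as at least one snapshot still links to it.
rm -r "$tmp/2024-01-01-000000"
cat "$tmp/2024-01-02-000000/file.txt"           # hello

rm -r "$tmp"
```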

A very convenient system for browsing old versions of files, restoring them, deleting old backups, and so on, right up to restoring the entire system from a snapshot.

After reviewing all the Linux backup systems I knew of, each was discarded as not quite right. But there turned out to be a simple solution: rsync, in a single line.

rsync has a wonderful option, --link-dest, which does exactly the logic described above: it compares against a previous snapshot and creates hardlinks for unchanged files. The -x option keeps rsync on a single file system, which excludes mounts like /proc, /sys, /dev, etc. --delete removes files that have disappeared from the source.

The whole time machine is a single script along the lines of:
#!/bin/sh
date=`date "+%Y-%m-%d-%H%M%S"`
SRC=/
DST=/mnt/backup-hdd/Backups.backupdb/atv
rsync -ax \
  --delete \
  --link-dest=../Latest \
  $SRC $DST/Processing-$date \
  && cd $DST \
  && mv Processing-$date $date \
  && rm -f Latest \
  && ln -s $date Latest
Drop the script into /etc/cron.hourly and voila: we have backups just like Time Machine.
If the system dies, we can boot from a USB flash drive, format the partition, run the same rsync in the opposite direction, then reboot or chroot, and the system works again.

We also put a simple script into /etc/cron.daily that deletes hourly snapshots from previous days, leaving only 24 for the last day, then thins out daily snapshots from previous months, and so on, just like Apple's Time Machine; pick whatever retention sequence suits you.

Want to push backups over the network to a backup server? No problem. Set up passwordless authorization with ssh keys and change the script slightly:
#!/bin/sh
date=`date "+%Y-%m-%d-%H%M%S"`
SRC=/
DST=/mnt/backup-server-hdd/Backups.backupdb/atv
rsync -axz \
  --delete \
  --link-dest=../Latest \
  $SRC backup-user@backup-server:$DST/Processing-$date \
  && ssh backup-user@backup-server \
  "cd $DST \
  && mv Processing-$date $date \
  && rm -f Latest \
  && ln -s $date Latest"
A test run: the 3 GB system on the Apple TV syncs against the last snapshot on the server in about five minutes; the server is remote, not on the local network, with a connection of about 1.5 MB/s. We added -z for compression. Not bad, and if the system grows, the schedule can be changed from hourly to something sparser. One caveat: rsync over the network does not seem to transfer "owner:group" of files; connecting as backup-user@..., it will use that user. You can simply dump all the file attributes into a text file and include it in the backup.
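Dumping the attributes can be a one-liner before the rsync run; this sketch assumes GNU find's -printf, and the output path is just an example:

```shell
#!/bin/sh
# Record mode, owner and group for every file on the root file system
# (-xdev stays on one FS, like rsync's -x), so ownership can be
# reapplied by hand after a network restore.
find / -xdev -printf '%m %u %g %p\n' > /root/file-attributes.txt
```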
Now, things Mac users don't get:
1. Flexible scheduling with cron.
2. The --exclude=PATTERN / --exclude-from=FILE options allow much more flexible exclusion rules than just picking folders. For example, it really annoys me that Time Machine on my Mac backs up every *.o file: compiling a project easily sprays a couple of gigabytes of object files around, and I don't want to exclude the project folders from the backup entirely.
3. It is easy to add extra logic, such as database locks, if necessary.
4. It is also easy to add, alongside the full snapshot, a snapshot of only the files that changed, making it easy to see later which files were modified. Hardlinks take up almost no space anyway.
5. Add whatever else you like; after all, you have Linux in your hands ;)

Update: I'll add the list of requirements for a backup system that I had in mind when looking at the available packages/systems:
1. Simplicity of use (rsync: just one command).
2. Availability (you will find rsync on practically any livecd).
3. Dependencies (if your disk has died and you booted from a livecd, do you now need to install packages and their dependencies somewhere?).
4. No config files, or a bare minimum of them (or would you rather first dig through the backup by hand looking for the config?).