Source system: Xeon E3-1275, 64GB RAM, 2x 480GB SSD (for the git dataset and the local target), 2x 4TB 7200rpm disks (for the bigger dataset), using XFS, running AlmaLinux 8.6
Target system: AMD Turion(tm) II Neo N54L Dual-Core Processor (yes, this is old), 6GB RAM, 2x 4TB WD RE 7200rpm disks, using ZFS 2.1.5, running AlmaLinux 8.6
Linux kernel sources, initial git checkout at v5.19, then changed to v5.18, v4.18 and finally v3.10 for the last run.
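Each backup run therefore hits the same working tree at a different kernel tag; conceptually, the benchmark loop looks like this (a hypothetical sketch, not the actual benchmark script):

```bash
#!/usr/bin/env bash
# Hypothetical outline of the benchmark: check out a tag, then let
# every tool make an incremental backup of the same working tree.
SRC=/opt/backup_test/linux

for tag in v5.19 v5.18 v4.18 v3.10; do
    git -C "$SRC" checkout --quiet "$tag"
    # run each backup tool against "$SRC" here, timing every run
done
```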
The initial git directory totals 4.1GB, with 5039 directories and 76951 files. Using `env GZIP=-9 tar cvzf kernel.tar.gz /opt/backup_test/linux` produced a 2.8GB file. Using "best" compression with `tar cf - /opt/backup_test/linux | xz -9e -T4 -c - > kernel.tar.xz` produces a 2.6GB file, so there is likely significant room for deduplication in the source files, even before running multiple consecutive backups of the git repo at different points in time.
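Consolidated, those baseline compression runs look like this (the `du` check at the end is added here for illustration):

```bash
# Classic gzip baseline at maximum compression (GZIP=-9 sets gzip's level).
env GZIP=-9 tar cvzf kernel.tar.gz /opt/backup_test/linux

# xz baseline at "best" compression, using 4 threads.
tar cf - /opt/backup_test/linux | xz -9e -T4 -c - > kernel.tar.xz

# Compare the resulting archive sizes.
du -h kernel.tar.gz kernel.tar.xz
```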
Numbers (backup/restore times in seconds, repository sizes in kB):
Operation | bupstash 0.11.0 | borg 1.2.1 | borg_beta 2.0.0b1 | kopia 0.11.3 | restic 0.13.1 | restic_beta 0.13.1-dev | duplicacy 2.7.2 |
---|---|---|---|---|---|---|---|
backup 1st run | 9 | 38 | 54 | 9 | 22 | 24 | 30 |
backup 2nd run | 13 | 19 | 23 | 4 | 8 | 9 | 12 |
backup 3rd run | 8 | 25 | 37 | 7 | 16 | 17 | 21 |
backup 4th run | 6 | 18 | 26 | 6 | 13 | 13 | 16 |
restore | 3 | 15 | 17 | 6 | 7 | 9 | 17 |
size 1st run | 213208 | 257256 | 259600 | 259788 | 1229540 | 260488 | 360244 |
size 2nd run | 375680 | 338796 | 341488 | 341088 | 1563592 | 343036 | 480888 |
size 3rd run | 538724 | 527808 | 532256 | 529804 | 2256348 | 532512 | 723556 |
size 4th run | 655712 | 660896 | 665864 | 666408 | 2732740 | 669124 | 896312 |
Remarks:
- It seems that the current stable restic version (which lacks compression) uses huge amounts of disk space, hence the additional test with the current restic beta, which supports compression.
- kopia was the best all-round performer on local backups when it comes to speed
- bupstash was the most space efficient tool, although its lead over borg beta shrinks to roughly 10MB by the 4th run
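For context, the timings above are wall-clock and the sizes are on-disk repository sizes; a minimal sketch of how one run can be measured, using borg as an example (the repository path and archive name are hypothetical):

```bash
# Wall-clock timing of one incremental backup run.
time borg create /data/local_repos/borg::run3 /opt/backup_test/linux

# On-disk repository size in kB, matching the "size" rows above.
du -sk /data/local_repos/borg
```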
Remote repositories are accessed over SSH (running the tool's own binary on the target) for bupstash and borg, and over SFTP for kopia, restic and duplicacy.
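As a sketch, the remote repository locations would look roughly like this (host, user and paths are hypothetical, and kopia's flag spellings should be checked against its docs):

```bash
# borg and bupstash run their own binary on the target, over SSH.
export BORG_REPO=ssh://backup@targetfqdn/data/borg
export BUPSTASH_REPOSITORY=ssh://backup@targetfqdn/data/bupstash

# kopia, restic and duplicacy go through plain SFTP.
restic -r sftp:backup@targetfqdn:/data/restic snapshots
duplicacy init mydataset sftp://backup@targetfqdn/data/duplicacy  # run inside the source directory
kopia repository connect sftp --host targetfqdn --username backup \
    --keyfile ~/.ssh/id_rsa --path /data/kopia
```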
Numbers (backup/restore times in seconds, repository sizes in kB):
Operation | bupstash 0.11.0 | borg 1.2.1 | borg_beta 2.0.0b1 | kopia 0.11.3 | restic 0.13.1 | restic_beta 0.13.1-dev | duplicacy 2.7.2 |
---|---|---|---|---|---|---|---|
backup 1st run | 22 | 44 | 63 | 101 | 764 | 86 | 116 |
backup 2nd run | 15 | 22 | 30 | 59 | 229 | 33 | 42 |
backup 3rd run | 16 | 32 | 47 | 61 | 473 | 76 | 76 |
backup 4th run | 13 | 25 | 35 | 68 | 332 | 55 | 53 |
restore | 172 | 251 | 256 | 749 | 1451 | 722 | 1238 |
size 1st run | 250098 | 257662 | 259710 | 268300 | 1256836 | 262960 | 378792 |
size 2nd run | 443119 | 339276 | 341836 | 352507 | 1607666 | 346000 | 505072 |
size 3rd run | 633315 | 528738 | 532970 | 547279 | 2312586 | 536675 | 756943 |
size 4th run | 770074 | 661848 | 666848 | 688184 | 2801249 | 674189 | 936291 |
Remarks:
- Very bad restore results can be observed across all backup solutions; we'll need to investigate this:
- Both links are monitored by dpinger, which shows no loss
- The target server, although (really) old, shows no observed bottlenecks (it is monitored; neither iowait, disk usage nor CPU is skyrocketing)
- kopia, restic and duplicacy seem to not cope well with SFTP, whereas borg and bupstash are at an advantage since they run their own binary on the target over SSH
- I chose SFTP to make sure the SSH overhead is similar between all solutions
- It would be a good idea to set up kopia and restic HTTP servers and redo the remote repository tests; see the sketch after this list
- Strangely, the bupstash and duplicacy repos are significantly larger than the local repos for the same data, perhaps because a chunking algorithm adapts chunk sizes depending on transfer rate or similar? That could be discussed with the solutions' developers.
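A hedged sketch of that HTTP server setup (ports, paths and credentials are hypothetical; restic's REST backend needs the separate rest-server binary, and the flags below are from memory, so check each tool's docs):

```bash
# On the target: serve a restic repository over HTTP with rest-server.
rest-server --path /data/restic-rest --listen :8000

# On the target: expose a kopia repository through kopia's API server
# (--insecure because this lab setup skips TLS).
kopia server start --insecure --address http://0.0.0.0:51515 \
    --server-username backup --server-password secret

# On the source: point restic at the REST backend instead of SFTP.
restic -r rest:http://targetfqdn:8000/ backup /opt/backup_test/linux
```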
Disclaimers:
- The script ran on a lab server that holds about 10 VMs. I've made sure that CPU/MEM/disk wait stayed the same between all backup tests; nevertheless, some deviations may have occurred while measuring.
- Bandwidth between source and target is theoretically 1Gbit/s. Nevertheless, I ran a quick iperf3 test to make sure that bandwidth is actually available between both servers:
`iperf3 -c targetfqdn` results:

```
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   545 MBytes   457 Mbits/sec   23    sender
[  5]   0.00-10.04  sec   544 MBytes   455 Mbits/sec         receiver
```

`iperf3 -c targetfqdn -R` results:

```
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec   530 MBytes   443 Mbits/sec  446    sender
[  5]   0.00-10.00  sec   526 MBytes   442 Mbits/sec         receiver
```
- Deterministic results cannot be achieved since too many external parameters come into play when running the benchmarks. Nevertheless, the systems are monitored, the tests were done when no CPU/RAM/IO spikes were present, and no bandwidth problem was detected.