Use rsnapshot (a mature tool that needs only small adjustments these days: hourly, daily and weekly backups, retaining about 6 months of changes) and keep critical data on XFS file systems; my backups are on XFS too, with one exception: one drive is ext4. Additionally, I have an rsync script doing extra backups (rsync plus SHA-1 checksums) of the same documents, just in the unlikely case rsnapshot fails. I also have a weekly reminder (monthly might be sufficient) in my calendar to check that the backups actually work, just to rule out that the job automation has somehow broken. In total I use 5 physically different disks.
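For orientation, such a retain scheme looks roughly like this in `/etc/rsnapshot.conf` (paths and exact counts are illustrative, not my actual config; note that rsnapshot insists on TAB-separated fields):

```
# /etc/rsnapshot.conf (excerpt) -- fields must be separated by TABs
snapshot_root	/mnt/backup1/rsnapshot/
retain	hourly	24
retain	daily	7
retain	weekly	26
backup	/home/	localhost/
```

With 26 weekly snapshots on top of the hourly/daily ones, you keep roughly half a year of history.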
I also have btrfs on my system disk and for some non-critical data, with automated btrfs snapshots, but that is more a convenience in case something fails or breaks, not part of my critical backup strategy. In the long term I'm considering replacing the ext4 drive with something btrfs-based, but I'm not yet sure if/when to do this.
I also keep some things, non-binary stuff, in daily auto-updated git repos, which are then also rsync'ed to the backups: git retains the former states, and rsync puts the data somewhere else, as elaborated above.
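A daily auto-update job of that kind can be as small as the following sketch (the repo path and commit message are assumptions; a cron entry or systemd timer would call it once a day, and for demonstration it sets up a throwaway repo instead of a real docs directory):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)            # stand-in for e.g. ~/docs in a real setup
git -C "$repo" init --quiet
git -C "$repo" config user.email "you@example.com"
git -C "$repo" config user.name "you"

echo "notes" > "$repo/notes.txt"   # pretend a document changed today

git -C "$repo" add -A
# commit only if something was actually staged; otherwise do nothing
git -C "$repo" diff --cached --quiet || \
    git -C "$repo" commit --quiet -m "auto: daily snapshot"
```

The `diff --cached --quiet` guard keeps the history free of empty commits on days where nothing changed.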
Backups are isolated from the user account, so that the user account cannot break them.
That avoids most single points of failure (tools, file systems, hardware/drives, automated invocation of tools, accidental user deletion, capture of/accidents on the user account). Ignored/accepted risks: takeover of the root account, or very special kernel bugs.
Obviously, as you suggest, keeping some of that offline/detached is an option for you to mitigate these risks too → you can, like me when I take my laptop away from the non-mobile backup equipment, mount the backup file systems with nofail, and make the backup scripts depend on the backup drives being mounted. That way you do not need to do much: if the system is booted with (some of) the devices attached, they auto-mount and then auto-backup; otherwise, not.
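A minimal sketch of that mount-guarded setup (the mount point, fstab line, and the rsnapshot call are assumptions, adapt to your own devices):

```shell
#!/bin/sh
# /etc/fstab line (illustrative) -- "nofail" lets the system boot fine
# whether or not the backup drive is attached:
#   UUID=...  /mnt/backup1  xfs  defaults,nofail  0  2

# Run a command only when its target drive is actually mounted.
run_if_mounted() {
    mnt=$1; shift
    if mountpoint -q "$mnt"; then
        "$@"                                          # drive is there: back up
    else
        echo "$mnt not mounted, skipping: $*" >&2     # drive detached: no-op
        return 1
    fi
}

# usage (hypothetical): run_if_mounted /mnt/backup1 rsnapshot daily
```

Called from cron or a systemd timer, this silently skips the backup whenever the drive is detached, which is exactly the "do not need to do much" behaviour described above.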
Implications:
- rsnapshot is super efficient and quick as long as no large changing files are involved: thanks to hardlinks, nothing but file system metadata is duplicated for a file that does not change. But this can become a mess if large files that change are included in rsnapshot backups: e.g., if just 1 bit of a 10 GB file changes, the whole file is backed up again, all 10 GB of it. So three consecutive backups, each with 1 bit of the 10 GB file changed, use 40 GB in total (the original plus three full copies).
- git should not be used on binary data and the like. Rule of thumb: if it is human-readable, git is fine; if not, it can be slow and inefficient. As reliable and flexible as it is, it is not the most efficient tool anyway, since it was never intended for backups. But in my experience, the more critical and important something is (or can become), the smaller it usually is (e.g., documents).
- be careful if you combine the two solutions: since git commits its changes daily, its internal files (which grow over time) would trigger regular rsnapshot backups, with each snapshot containing all modified git files. Therefore, in all circumstances, you need to ensure the .git folder of the initialized backup repo is excluded from rsnapshot backups (which does not mean you cannot use rsnapshot to back up git repos → this issue only concerns git-based backup solutions). E.g., put the .git directory somewhere else and link it into the backup repo; then only the link is backed up, and the link obviously doesn't change.
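git has a built-in way to do the "put .git elsewhere and leave a pointer" part: `--separate-git-dir` leaves only a tiny, unchanging `.git` pointer file in the work tree. A sketch with hypothetical paths (an alternative would be an rsnapshot `exclude` line, shown in the comment):

```shell
#!/bin/sh
# Alternative: exclude it in /etc/rsnapshot.conf instead (TAB-separated):
#   exclude	.git/
set -e
work=$(mktemp -d)
mkdir -p "$work/gitstore"           # lives outside the rsnapshot'ed tree

# The work tree at $work/docs gets a one-line ".git" pointer file,
# while the actual repository data goes to gitstore/docs.git.
git init --quiet --separate-git-dir="$work/gitstore/docs.git" "$work/docs"

cat "$work/docs/.git"               # a "gitdir: ..." pointer, not a directory
```

Either way, rsnapshot never sees the churning git object store, only the stable work tree (plus, with the pointer approach, one small file that never changes).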
Of some stuff I also have M-Discs, but keeping them up to date regularly is a little cumbersome, so that is more for data that both doesn't change over long periods and doesn't accrue often. That might be an option for you too, though I would not rely on it alone (again, it could become a single point of failure).
All of that is available and maintained in the default Fedora repositories.
Maybe some of that is useful, or may serve as an incentive to rationalize some risks.
Supplement: if applicable, don’t forget to embed/align your backup strategy into/with your encryption/access strategy 