I was tempted by ZFS, but as a novice it felt like too much trust to put in something that complex. Just reading about it led me to too many data-loss stories. I've had enough corporate IT teams tell me their RAID went down with total data loss and their tape drive recovery isn't working to find a lot of virtue in "dumb-as-hell" infrastructure design!
Most of the information surrounding ZFS online is sadly noise, not signal. And most of that noise is "wow so enterprise storage infrastructure", which breaks down into a) Here's All The Complicated Things I'm Gonna Do And How I'm Gonna Do Them and b) I Broke It, Help.
So it seems dizzying and unstable, but the dizzying bit is just nerd ego and hype, and the instability is also just nerd ego meets inevitability. (Now that I think about it, it's a bit reminiscent of the abandoned "ABC shall be the ultimate XYZ" software you sometimes find online, and of script-kiddie mentality.)
ZFS is just a reasonable, mostly solid, somewhat idiosyncratic filesystem with nice features. Most of the failure noise online is from people who have no idea what they're doing; there are curiously (suspiciously, in a good way) few public reports of commercial-scale data loss and the like.
I've taken to using it as the root filesystem on a couple of Debian boxes. The Debian install instructions are both overblown and almost hilariously broken, but that's a devops/sysadmin documentation failure, not ZFS' fault. I spent a few years on Slackware, and the ZFS setup experience reminds me of that rock-solid-meets-flying-by-the-seat-of-one's-pants mentality.
Spend an hour setting up ZFS on old hardware. Play with snapshots and manually delete test documents until you're confident in how the feature works.
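A minimal snapshot drill might look like this (the pool/dataset name tank/docs is made up; use whatever you created):

    # take a snapshot, "lose" a file, then get it back
    zfs snapshot tank/docs@before-delete
    rm /tank/docs/important.txt
    zfs rollback tank/docs@before-delete
    # or browse the snapshot read-only instead of rolling back
    ls /tank/docs/.zfs/snapshot/before-delete/

Once rolling back a deleted file feels boring, you understand the feature.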
Highly recommended! One can even learn about, and experiment with, ZFS using files sitting on a disk by setting the files up as loopback devices. For example:
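Something along these lines (the scratch file paths and pool name are arbitrary):

    # create two 1 GiB sparse files to stand in for disks
    truncate -s 1G /tmp/zdisk0.img /tmp/zdisk1.img
    # attach them as loopback devices (-f --show picks and prints the first free /dev/loopN)
    sudo losetup -f --show /tmp/zdisk0.img
    sudo losetup -f --show /tmp/zdisk1.img
    # build a mirrored scratch pool out of the loop devices it printed
    sudo zpool create scratch mirror /dev/loop0 /dev/loop1
    # tear everything down when done
    sudo zpool destroy scratch
    sudo losetup -d /dev/loop0 /dev/loop1

(zpool will also accept plain file paths as vdevs directly, which skips the losetup step, but loop devices behave a bit more like real disks.)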
It may be simpler to play with ZFS in VMs. The virtual disks will be small and inefficient, but "good enough" to prove whether or not the setup works.
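If you go the VM route and happen to use QEMU, the scratch disks are one command each (names and sizes made up):

    # a pair of small virtual disks to feed to a ZFS test VM
    qemu-img create -f qcow2 ztest0.qcow2 4G
    qemu-img create -f qcow2 ztest1.qcow2 4G
    # attach with e.g. -drive file=ztest0.qcow2,if=virtio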
Not that I've tried any of this before. I happened to have an old computer lying around with ~4 small hard drives that I messed with to learn the ins and outs of ZFS the first time I used it. (Shuck hard drives, pull them out of laptops, etc. It's not really that hard to get 4 hard drives to play with, especially if you ask your friends for ancient laptops with 200 GB drives or something.)
ZFS is very robust, especially in a mirrored setup. Compared to your sync setup, ZFS gives you snapshots, which let you rewind to earlier versions of files; that's a good mitigation for ransomware attacks. You also get bit-rot protection, where you know that what you read is what you wrote. Without that protection, if a file gets corrupted on your primary drive, the corrupted file gets synced to your backup drive. ZFS helps with both of these, and a mirrored setup gives you a boost in read IO speed as well.
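For the bit-rot side, the usual habit is a periodic scrub; a sketch, assuming a pool named tank:

    # walk every block, verify checksums, repair bad copies from the mirror
    sudo zpool scrub tank
    # see progress and whether anything was repaired
    sudo zpool status -v tank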
Of the many ways storage can get complex, I've found ZFS to be one of the most straightforward.
Yeah, I've run a home fileserver before, and it's just not fun continually worrying about updates and maintaining a bunch of monitoring to catch when some cron job breaks.
They're not exactly cheap, but I'm another one who went the Synology route. They slap a user-friendly UI on top of Linux mdraid and BTRFS, so you can RAID together a few drives, set up automatic BTRFS snapshots with retention rules (keep 12 hourly snapshots, 7 daily, 3 monthly, etc.), set up disk-scrubbing schedules, and so on, all just by clicking around.