My wild guess would be that they hold all data erasure-coded and run regular scrubs, so that any blocks with bit flips get caught and corrected.
I know some drives do regular scrubs, but I don't think it's all that common. Mostly they monitor error rates when fulfilling read requests from the host, and use that to decide when data needs to be refreshed. If you write enough new data to the drive, eventually wear leveling will mean all of the old data you haven't modified has been moved and thus refreshed as a side effect. So the drive only needs to do background scrubs if it gets a very WORM-like workload that also leaves large portions of the data not just unmodified but entirely unread.
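To make that concrete, here's a minimal sketch of the read-triggered refresh policy I'm describing, in Python. The threshold, names, and bookkeeping are all illustrative assumptions on my part, not any vendor's actual FTL logic:

    from dataclasses import dataclass

    # Assumed threshold: relocate a block once the raw bit error rate seen
    # on host reads gets uncomfortably close to what the ECC can correct.
    RELOCATE_BER = 1e-4

    @dataclass
    class Block:
        bits_read: int = 0
        bits_corrected: int = 0  # flips fixed by ECC during host reads

        @property
        def raw_ber(self) -> float:
            return self.bits_corrected / self.bits_read if self.bits_read else 0.0

    def on_host_read(block: Block, bits: int, corrected: int) -> bool:
        """Account for one host read; return True if the FTL should
        rewrite this block's data to a fresh location."""
        block.bits_read += bits
        block.bits_corrected += corrected
        return block.raw_ber > RELOCATE_BER

The point being: data that's never read never feeds this counter, which is exactly the gap that a background scrub (or wear-leveling churn) has to cover.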
NAND flash data retention is related to how worn out the flash is, in terms of program/erase cycles. A drive at the end of its rated write endurance is still expected to retain data for one year (consumer) or three months (enterprise), per the JEDEC endurance specs. Flash that isn't significantly worn out has much longer data retention.
Yes. The combination of erasure coding plus regular scrubbing is already the standard in the largest proprietary storage systems. As it happens, I worked on exactly the scrubbing piece ("anti-entropy") for such a system at Facebook. There was a lot of analysis of data-loss probabilities based on encodings, placement across power/network domains, scrub rates, repair rates, etc. Since many of these also have performance and resource-use implications, it's actually a very complex balancing act, which is why knowledge of these second- and third-order problems and their solutions is only slowly filtering down to storage systems you can deploy yourself.
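For a flavor of the data-loss math, here's a back-of-the-envelope sketch (my own simplification, not the actual model we used): with an (n, k) erasure code you lose a stripe when more than n - k of its fragments fail before repair finishes, which under independent failures is just a binomial tail:

    from math import comb

    def stripe_loss_prob(n: int, k: int, p: float) -> float:
        """Probability that a single (n, k) erasure-coded stripe loses data:
        more than n - k of its n fragments fail before repair, assuming
        independent fragment failures with probability p each."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(n - k + 1, n + 1))

    # Example: a 9-of-14 encoding, where p is the chance a fragment dies
    # within one scrub/repair window. Faster scrubs and repairs shrink p.
    print(stripe_loss_prob(14, 9, 1e-3))  # ~3e-15 per stripe per window

Correlated failures across shared power/network domains break the independence assumption, which is why placement matters so much in the real analysis.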
I'm not quite sure what you mean there. Your software certainly should do regular scrubs, but plenty of storage systems out there don't. And there most definitely are SSDs that do their own internal scrubs, akin to what host software can do.