Even so, and even with tried-and-true methods, I still suggest copying some random files to a temp folder and testing a mock backup operation on them. It helps to make sure that you have the ordering right, that the command is sane, that you understand the arguments, and that what the command actually does matches what you expect. It's kind of like a trust-building exercise: seeing is believing, and once you see a command work (importantly, as expected) a few times, you can start to trust it a bit more, and especially your understanding of how to use it.

To verify that a copy has succeeded, you'll need to use "-c" on a second run. Size and timestamp may match after a copy even if the data didn't actually get written correctly; with -c the comparison is by checksum, so a content difference gets flagged either way. A "--dry-run" is a mock run, a no-changes mode, yet a dry run can still perform the file comparisons (it'll read, but not write or delete). The difference is that if you specify --dry-run and files are found not to match, then beyond the log entries (and console output) no second transfer actually happens; any difference detected will not be corrected.

So, let's assume that most of the time the verify pass will find that nothing changed (disks are pretty reliable). Most of the time, providing "-c" without --dry-run will behave the same as if you had also used --dry-run, in that the files will match and nothing actually changes. (Thank goodness, since we'd certainly hope our storage is pretty reliable.) Anyway, the whole idea is that even though most of the time nothing goes wrong, if you check every time then you at least know for sure that it worked. Even if the first thousand times we copy files everything goes just peachy, it's all about catching the 1001st that failed, and knowing preemptively so that we can do it again.
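To make the copy-then-verify idea concrete, here's a minimal Python sketch of the same workflow: copy some throwaway files to a temp folder, then re-check them by content hash. The `dry_run` flag mimics the rsync behavior described above (read and report mismatches, but never fix them); this is an illustration of the concept, not what rsync actually runs internally.

```python
import hashlib
import os
import shutil
import tempfile

def file_hash(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(src_dir, dst_dir, dry_run=True):
    """Compare each source file against its copy by content hash.

    With dry_run=True this only reads and reports, like rsync's
    --dry-run; with dry_run=False it re-copies any mismatched file.
    Returns the list of mismatched names either way.
    """
    mismatched = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        if not os.path.exists(dst) or file_hash(src) != file_hash(dst):
            mismatched.append(name)
            if not dry_run:
                shutil.copy2(src, dst)  # re-copy only what failed
    return mismatched

# Mock run on random throwaway files, exactly as suggested above:
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
with open(os.path.join(src, "a.bin"), "wb") as f:
    f.write(os.urandom(4096))
shutil.copy2(os.path.join(src, "a.bin"), os.path.join(dst, "a.bin"))
print(verify(src, dst))  # empty list when the copy is intact
```

Running this a few times on junk data is exactly the kind of low-stakes rehearsal that builds trust before you point the real command at 24TB.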
(Getting our mulligan chance.) Thankfully you've got lots of nice hashes across those 24TB of files, since that's probably thousands and thousands of them. If you were to generate one hash over a single massive multi-terabyte file (spanning all 24TB), then yep, the odds of a collision mattering would be significantly higher; it'd probably need to be split up and done in chunks, or both files read at the same time and compared byte by byte during the read. My bet is that Windows Explorer, and not TeraCopy, touched the file while generating its one-time thumbnail cache (preview). There's probably not much you can do about the last-access timestamp here, since the file truly was accessed to generate it. And yep, that option should be set for both generation and comparison.

As far as saving hashes after a run, that's up to you, depending on whether you think you'll want to re-run a comparison later. Of course, hashes can always be saved manually even if this option isn't checked, at any time after a transfer completes (so long as the transaction is still shown in TeraCopy's log).

Keeping both files (in the destination) will require renaming one of the two, since there's a name collision. You're right that you definitely don't want to skip, overwrite, replace, or anything like that. Each of the choices you're looking at is a different way to decide which file gets the rename: the older of the two, the copied (source) file, or the target (destination) file. Since you're performing a backup, you'll probably want to select something like "Rename all target files". All of the choices only rename on a collision; they have no effect when copying a new file that doesn't already exist (where no rename would be needed). In all cases you'll find a (2) [or a (3), and so on] tacked onto the end of one of the files, yep. Personally, I'd say it should be the file that was already in the backup beforehand, which is why I'd select renaming the target specifically.
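The byte-by-byte alternative mentioned above (reading both files in parallel instead of hashing a single multi-terabyte stream) can be sketched in a few lines. This reads both files in matching chunks and stops at the first difference, so there's no hash involved at all and therefore zero collision risk; it's a generic sketch, not something TeraCopy exposes.

```python
import os

def files_equal(path_a, path_b, chunk_size=1 << 20):
    """Compare two files byte for byte, reading both in parallel chunks.

    No hashing, so no collision risk; bails out early on the first
    differing chunk (or on a size mismatch, before reading anything).
    """
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            chunk_a = fa.read(chunk_size)
            chunk_b = fb.read(chunk_size)
            if chunk_a != chunk_b:
                return False
            if not chunk_a:  # both hit EOF together
                return True
```

The trade-off versus hashing is that you can't save the result for later re-checks; a stored hash lets you verify the destination again next month without re-reading the source.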
I wouldn't use the "older" choice, as it's possible you might have intentionally restored an older backup to the source and not edited it this time around (e.g., undoing changes). The behavior of "keep both" should, I suspect, be the same (as you found it to be); the big button is probably just there to give a clear visual choice without reading through the fine print (for ease of use). It's hard to think of more than those three ways to pick which of the two colliding files gets renamed. (I doubt there's a good fourth.)

Paranoia is the right attitude to have when you go about backups, since double-checking and pre-testing are the only way to prevent stupid mistakes. It really does work to just re-read everything a few times before you hit Enter, and to check again afterwards and read all the logs. Always better to preempt problems than to deal with them head-on later. The great thing about being a PC user, and especially a paranoid one, is that you can get an overkill (equally paranoid in every aspect) setup to mirror your concern: filesystem, software and tools, OS, memory, drives, etc. I strongly suspect that everyone who works on creating these things is in the same mindset as us, literally, LOL. But yeah, going overboard with redundancy helps reduce the risk at least a bit.
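The "(2)" / "(3)" suffix behavior described above is easy to model if you want to sanity-check what a collision rename will look like before running the real transfer. This is a hypothetical sketch of that naming pattern; the exact format TeraCopy uses is an assumption here, and the function only picks a free name rather than touching any files.

```python
import os

def collision_name(directory, filename):
    """Return a free name in `directory`, appending " (2)", " (3)", ...
    before the extension on a name collision.

    Mimics the "(2)" suffix pattern; whether TeraCopy formats it
    exactly this way is an assumption. New (non-colliding) files
    come back unchanged, matching the "no effect" case above.
    """
    if not os.path.exists(os.path.join(directory, filename)):
        return filename  # no collision, no rename needed
    stem, ext = os.path.splitext(filename)
    n = 2
    while os.path.exists(os.path.join(directory, f"{stem} ({n}){ext}")):
        n += 1
    return f"{stem} ({n}){ext}"
```

Whether this name goes to the source copy or the existing target file is exactly the choice the rename options control; the suffix logic itself is the same either way.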