How much free space to leave on an external HDD used for storage? + Corruption across multiple backups?

Discussion in 'SSD and HDD storage' started by 321Boom, Nov 21, 2017.

  1. A2Razor

    A2Razor Master Guru

    Messages:
    397
    Likes Received:
    15
    GPU:
    ASUS R9 Fury X
It might be a problem depending on the flat owner, what your electrical box is like, and where power comes into the house (a mini-split will usually have its own dedicated breaker installed). Since it's your parents' house and the breaker box won't be filled to capacity already, there's probably no consideration other than money. Noise-wise, if you can go for a mini-split, it's going to be the quietest and cool the best of all possible options.

Costs where you live sound a lot more reasonable than here for the install job. If you can get it that low, then it's definitely worth going for a mini-split.
^ I'd go for the mini-split, considering you plan to be there for many more years.

-Out here I'm using PACs (portable air conditioners) to cool my computer rooms, since I'm trying to be economical and avoid a permanent installation. Another good thing about PACs is that they're mobile: if your central air, or any smaller unit, ever breaks, you can roll in a PAC and use it temporarily rather than having the room cook for a few days.

InfiniBand is an alternative to Ethernet -- check out this blog post. It can be cheaper ... but configuration is also not as straightforward as Ethernet. You don't just plug it in and it works.

Some modern motherboards come with 10gbit Ethernet integrated, though those are the more expensive models. I'd say the only tradeoff is cost, since at minimum you'd wind up buying extra hardware for the computers you're using right now to get them on a faster network.

Yep, only the motherboard influences it. Beyond the unbuffered vs. buffered choice, you usually don't need to be that careful about the brand of ECC memory you buy.

EDIT: I take that back: just don't buy "BlackDiamond" memory, for motherboard-compatibility reasons. Crucial, Samsung, Kingston, etc. are fine (any big, well-known brand name).
---Board makers usually post a QVL (qualified vendor list) in their documentation, though BlackDiamond is the only brand where memory that wasn't on the QVL actually failed to work for me.
     
    Last edited: Jan 12, 2018
  2. 321Boom

    321Boom Active Member

    Messages:
    81
    Likes Received:
    9
    GPU:
    GTX980 Ti
Thanks for the information, I'll go for a mini-split then. Besides, I've had experience with these in the past, and they really are awesome xD

    Haha that's a first, usually everything is really expensive here compared to other countries.

That is a very good point about the PAC as a backup for the AC.

Ok, I read the blog post, and I'm sorry to say that some of that stuff seems a bit out of my league to set up :/ Even the writer of the blog post said it took him a good few days, and he knows what he's doing, whereas I'd be a first-time user :/ Let's keep it a bit simpler, as long as I won't feel any negative impact with a standard 1gbit link (that is, as long as gameplay recordings at 1080p 60fps, around 45,000 kbps average bitrate, can be streamed from the server without stopping to buffer while I'm viewing them; that would be mega annoying and disappointing).
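
(Quick sanity check on my part, assuming that bitrate figure is in kbps: 45,000 kbps is 45 Mbit/s, or roughly 5.6 MB/s, while a 1gbit link moves about 125 MB/s raw (~110 MB/s in practice), so a single recording stream should use well under 10% of the link.)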

That's awesome that the only tradeoff is cost; I thought it would be something like heat or noise.

    Noted about BlackDiamond, thanks for warning me about your negative experience :/

    A couple of questions from some more research I did:
    1. 'ZIL (ZFS Intent Log) can be moved to an SSD to drastically boost performance of your ZFS pools, but it is obviously write intensive, and requires a small (<32GB should be fine) but robust SSD if you want to do that. Again though, if this fails, the system simply goes back to writing the ZIL on the same drives that are already in the pool.'
Will this benefit me for what I plan to do with the server? (Remember, it's basically just going to be my storage drive, plus watching videos from it, nothing more intensive than that.)
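
(For reference, in case I ever do want to try it later: from what I read, adding a log device is a one-liner along these lines, where 'tank' and 'ada1' are just placeholder names for the pool and the SSD:
zpool add tank log ada1
and it can apparently be removed again later with 'zpool remove tank ada1'.)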

    2. Interaction of TLER with the advanced ZFS filesystem
    'The ZFS filesystem was written to immediately write data to a sector that reports as bad or takes an excessively long time to read (such as non TLER drives); this will usually force an immediate sector remap on a weak sector in most drives.'
    https://en.wikipedia.org/wiki/Error...tion_of_TLER_with_the_advanced_ZFS_filesystem
Is this something I should take note of, or is it set up automatically? Would setting a lower TLER be better? If I'm understanding correctly, it's better to have TLER disabled if you have a RAID setup with 3 mirrors.
     
  3. A2Razor

    A2Razor Master Guru

    Messages:
    397
    Likes Received:
    15
    GPU:
    ASUS R9 Fury X
    Yep, cost only.

Especially if you go the Ethernet route and not some exotic setup (you're right that InfiniBand would require some days of study to get working, especially on BSD) -- the more expensive 5gbit and 10gbit NICs are pretty much plug-and-play compatible with FreeBSD out of the box. However, then you need the NICs, potentially a switch, sometimes higher-grade cable (or a repeater on a long run), etc, and that's a few hundred bucks easily.

(Of course, as always, read online first to check up on BSD compatibility, though in my experience the faster Ethernet NICs tend to just work out of the box [plug them in and that's it for setup].)

    You could do that, but I don't think you're going to see much of an improvement realistically.

The only situation where an SSD is going to help is very intensive writing to the array. To hit a sustained speed where the buffer can't be committed to the array faster than you're transmitting data over the network, you'd need to have upgraded to 10gbe and be reading from an SSD. Your array (having many striped disks in it to reach 24+TB capacity) will already be pretty quick, capable of sustaining several hundred MB/s of writes (faster than 1gbit can deliver). The CPU in the NAS is also dedicated to ZFS's chore of committing data, so it won't be fighting anything else role-wise.

I suspect the defaults won't cause you any problem -- you could reduce this if you want to, but again, I don't think you'll realistically need to do any tuning there. The three-way mirroring already provides protection. Although early detection can cost some performance, I really doubt there'll be any noticeable hit for your use, given this affects write performance and the array is primarily intended for archival and long-term storage of lots of data.
     
    Last edited: Jan 19, 2018 at 2:23 AM
  4. 321Boom

    321Boom Active Member

    Messages:
    81
    Likes Received:
    9
    GPU:
    GTX980 Ti
Good to know just in case; always good to have options. But will I really need more than a 1gbit NIC for what I intend to use the server for?

    Awesome, so no need for the SSD then.

This 3-way mirroring really is a blessing, huh, with all the stuff it protects against. Yep, you're right, I'm not going to be doing many intensive writes; the main use is as my storage drive and to watch/stream videos from it.

You know how we talked about Robocopy before? Well, I was going to give it a shot today (to avoid more needless non-ECC writes in future backups till I get the server), and I read that there is a GUI version called Robocopy GUI (https://en.wikipedia.org/wiki/Robocopy#GUI), and also an updated version of the GUI called RichCopy. Any opinions on these GUI versions vs the standard command-line version?

(Screenshots attached: RichCopy and Robocopy GUI.)
If they're as good as the command-line version, which checkboxes should I check? The Robocopy GUI has loads of copy options and filters which I really don't understand, unfortunately :(. What I'm aiming for is to mirror the storage drive that's in my gaming rig onto 2 external drives as backups (till I get the server). In about a month, when the next routine backup comes round, I'll just want it to update/sync the 2 backups with only the newly added files and the files that have changed.

On a separate note, but still related to Robocopy and file-syncing software: I don't understand why, if I simply open a Word document (not edit or save it, just open and view), the folder the Word doc is in gets its 'date modified' field set to the date I opened the doc. Pic attached to show what I mean; look at the folder called 'To watch':
(Screenshot attached: before and after.)
It's important to note that I didn't save or edit anything, just opened it. What's even weirder is that opening an Excel file or a Notepad file doesn't change the modified date like the Word doc does... Anyway, the point I'm getting at is: since Robocopy and rsync sync according to timestamp and filesize, how will they react to this change? Will they detect the folder as a new one and just resync it?
     

  5. A2Razor

    A2Razor Master Guru

    Messages:
    397
    Likes Received:
    15
    GPU:
    ASUS R9 Fury X
Robocopy and rsync will only synchronize files within the folder that have changed, "even if" Windows Explorer shows a newer timestamp on the folder itself (access, modify, creation). In other words, even if the folder shows a different (newer) timestamp, when robocopy descends into that folder and recursively looks at each file, it makes its decision independently on a file-by-file basis.

-This is a gray area for me on the GUIs, since I haven't played with robocopy GUIs much. I can tell you the argument you want here though: "/MIR", which stands for mirror.
Take a look here at what each of those options does. In gist, /MIR is /E coupled with /PURGE.

    /E: Copy subdirectories, including Empty ones. (aka, recursive copy)
    /PURGE: Delete dest files/dirs that no longer exist in source.

The reason you want /PURGE is that if you rename a file, /E will copy a new file under the new name, but the old one won't be erased afterwards. The same goes if you move a file somewhere else, such as when making folders for organization and shuffling items around. With /E alone you'll wind up with a lot of duplicates, which /PURGE removes for you automatically.

So, as an example, your command here could be as simple as:
    robocopy <source> <target> /MIR

^ This doesn't have a retry count or delay specified. The /R and /W values look fine in the GUI. I assume the GUI will give you a readout at the end, just like robocopy does from a command line, so after the copy finishes you just read the report and check whether any transfers failed.
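
If you did want to set them from the command line too, it would look something like this (the values here are just an example):

robocopy <source> <target> /MIR /R:2 /W:5

/R:n is the number of retries on a failed copy and /W:n the seconds to wait between retries; the defaults (one million retries, 30 seconds between them) can leave a run hanging for ages on a single bad file, which is why people usually lower them.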


You might also want to add the /MT argument, ESPECIALLY if you have a ton of tiny files rather than big files. It doesn't sound like it'll help much for the bulk of the content you're storing, so you'd have to experiment and test there. Multithreading helps where there's a choke on seek time (e.g. small files), yet it can actually hurt sustained transfer of large files.

[Also, as a caution, using /MT usually does increase fragmentation in the copy -- this can be a big deal if, say, you're cloning one disk to another from scratch.]

^ Try with /MT:1 and also with a higher number if you do this more than once, and time them both. Which does better really depends on the type of files you're copying. /MT:1 from a disk to an empty disk will create a copy that's almost fragmentation-free, which is why a lot of people do this as a two-birds-with-one-stone type deal.

With /MT:1, robocopy behaves similarly to xcopy.
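
So a test for your case might look something like this (paths are placeholders; run each against a fresh, empty destination and compare the elapsed time in the summary robocopy prints at the end):

robocopy <source> <target1> /MIR /MT:1
robocopy <source> <target2> /MIR /MT:16

Whichever finishes faster for your mix of files is the number to keep using.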

This is an odd one. It has to be program-specific though; likely a 'feature' of Word documents to record the last access of the file, or to write some transaction log of any user that's touched it.
     
    Last edited: Jan 19, 2018 at 3:14 AM
  6. 321Boom

    321Boom Active Member

    Messages:
    81
    Likes Received:
    9
    GPU:
    GTX980 Ti
Hmm, ok, one thing concerns me about this, sorry if it's going to get confusing x_X: When I get the server, I'm planning on connecting my 4TB storage drive (currently in my gaming rig) to the server and mirroring everything from it using rsync, so the server will have all of the data that's on the 4TB drive, and I'll start saving new stuff directly on the server from then onwards. The server becomes my primary storage location, and the 4TB drive that was in the gaming rig becomes one of the 2 external drives in the method I already use (I don't need it in my non-ECC gaming rig anymore since I'm saving everything to the server).

The data I rsynced from the 4TB drive will have a new creation date, I assume (for example, a file on the 4TB drive will show 2016, the date I originally saved it, but 2018 on the server, when I copied it). When I come to rsync again to the 4TB drive, this time the other way around, from the server to the 4TB drive (used as an external drive in an enclosure), will it delete all of the data that was originally on the 4TB drive because of the different creation date (seeing 2018 instead of 2016) and recopy it with the new date? I ask because I'd like to keep the data that was originally saved on the 4TB drive, since it's the original copy (it hasn't been moved around between copies, in case of errors), rather than having it copied to the server and then the server deleting and recopying it.

Yes, the /MIR option sounds like what I need. I agree on the /PURGE option; it wouldn't be a complete mirror without it, and I'd end up with lots of duplicates and more confusion.

    So, I found this:
    'The /mir option is equivalent to the /e plus /purge options with one small difference in behavior:

    With the /e plus /purge options, if the destination directory exists, the destination directory security settings are not overwritten.

    With the /mir option, if the destination directory exists, the destination directory security settings are overwritten.'
    https://docs.microsoft.com/en-us/pr...ows-server-2012-R2-and-2012/cc733145(v=ws.11)

It sounds like what you told me above, but what does it mean by 'destination directory security settings'? Is it better that these security settings are overwritten (using /MIR), or that they're not (using /e plus /purge)?

That's good, if it can really be something that simple (I'm not much of a fan of DOS and command prompts).

Is it a good thing or a bad thing that the command won't have a retry count or delay specified?

I know this will sound very amateur as a way of checking compared to your methods, but what I do when taking backups to the external drives is right-click -> Properties on the folders I backed up, and make sure the Size and the number of Files and Folders are the same as the source. (On a side note, just for knowledge: any idea why Size and Size on Disk are different in the Properties tab?)

Will this /MT option make the files copy more securely, or is it just a matter of speed? I have a mix of files, ranging from small files (like anime art) to large files (gameplay videos).

'[Also, as a caution, using /MT usually does increase fragmentation in the copy -- this can be a big deal if, say, you're cloning one disk to another from scratch.]'
So apart from speed, what would be the benefit of using the /MT option if it has this drawback?

'^ Try with /MT:1 and also with a higher number if you do this more than once, and time them both. Which does better really depends on the type of files you're copying. /MT:1 from a disk to an empty disk will create a copy that's almost fragmentation-free, which is why a lot of people do this as a two-birds-with-one-stone type deal.'
Is the only advantage of this speed, and nothing else? I'm in no rush, especially if I know everything is getting copied more safely. So to disable it, do I just not type the /MT option, or do I have to specify /MT:0? Would that be the safest route?

    'When specifying the /MT[:n] option to enable multithreaded copying, the /NP option to disable reporting of the progress percentage for files is ignored. By default the MT switch provides 8 threads. The n is the amount of threads you specify if you do not want to use the default.'
    https://en.wikipedia.org/wiki/Robocopy#Multithread_Copy/No_Progress_Bar
    Regarding this statement, does it mean /MT is enabled by default, as /MT:8?

Sorry for all the questions; I know some of them seem very noobish and probably stupid, but I'm really not familiar with the command prompt and the like, and pairing that with taking a backup of my data sends my OCD a bit on the fritz, knowing I'm using such a powerful tool on something so delicate. To me it feels like attempting a glass sculpture with a jackhammer. As you told me, robocopy and rsync are very powerful tools, especially in their destructive capability, and with great power comes great responsibility.

Indeed haha, it had me running in circles for a while till I figured out what was going on. Seeing updated folder dates when I know I didn't save anything new lol.

Thanks again for all your help and patience. It's very nice of you to take the time for someone you've never met.
     
  7. A2Razor

    A2Razor Master Guru

    Messages:
    397
    Likes Received:
    15
    GPU:
    ASUS R9 Fury X
Fortunately, rsync and robocopy can preserve timestamps in the transfer, so this won't be a problem for future re-syncs.
--This is why these tools are better than just doing a drag-and-drop copy (preservation of attributes, rights, ownership, and security). ;)

/MIR will maintain timestamps in robocopy and also sync folder rights. In rsync, use "-av":

    -v is verbose mode (more output to read, aka "tell me what you're doing" mode)
    -a is archive mode (equal to specifying -rlptgoD)
    -r is recursive (that's like /E in robocopy)
-l is maintain symlinks as symlinks rather than copying the linked target
    -p is maintain permissions (eg, keep rights)
    -t is maintain timestamps
    -o is maintain owner
    -g is maintain group (also ownership related)
    -D (--devices, --specials)
    --devices, preserve device files (super-user only)
    --specials, preserve special files
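
So a full mirror with rsync, the rough equivalent of robocopy's /MIR, would be something like this (paths are placeholders):

rsync -av --delete /source/ /destination/

--delete is rsync's counterpart to /PURGE: it removes files from the destination that no longer exist in the source. Watch the trailing slash on the source, too: with it, rsync copies the folder's contents; without it, the folder itself.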

    Entirely has to do with speed.

--The OS will read ahead of what software requests within a file as long as it's able to (a contiguous read); each copy job reads a file start-to-end and writes it start-to-end at the destination. The issue is small files, where the OS doesn't know what the copying program (in this case robocopy or rsync) needs to read beyond the currently open file; the OS can't predict which file will be opened next (that's unknown).

With an SSD that's no problem, because SSDs have fast random access; for a hard disk, it's just not the case. If the OS doesn't know the next file, the next file in the transfer will be delayed until the next rotation of the disk (time the drive sits 'idle'). It's in your favor speed-wise to queue up many files at once (have more than one open), so the OS can schedule reading them in fewer passes across the surface of the disk. Unfortunately this is a tradeoff: when writing multiple files at once, data isn't going out serially one file at a time, and most OSes (Linux and Windows alike) will space the files out on disk rather than packing them together (increasing free-space fragmentation to try to keep down file fragmentation).

The fragmentation is no big deal for archival, by the way, but the transfer itself may also go slower. All modern OSes prioritize reading many files quickly (i.e. perceived "responsiveness of the system") rather than focusing on single files (the big videos). The drive's command queue (NCQ) will be optimized to maximize the number of files opened rather than to maximize sequential read speed.

--It's usually a tradeoff, and there'll be some "magic value" for the thread count in that /MT argument that results in the fastest transfer. Past a certain point (too many threads), speed goes down.

No other benefits; the parallelization is just for speed. The default of 8 is a middle-ground ballpark guess assuming a mix of small files and big files.

Yes, it's enabled by default. You may even want to go higher depending on the situation.

For example, say I wanted to robocopy my MinGW folder off an HDD to a RAM disk [87,953 files, 2,060 folders, 1.35 GB].
^ That's the perfect case for bumping the /MT count to 20+, as those are primarily very small text files (headers).

Just think of it like "with great power comes great responsibility" (Spider-Man, Winston Churchill, etc). All the risk of batch operations can be avoided by testing the waters first: start small scale, and apply it large scale only after verifying success. (Just use them responsibly.)

I personally think of this as different from a glass statue. A glass statue is chiseled with small successive operations; the automation of that would be more like a CNC engraving tool with a detailed blueprint of its cutting path. robocopy and rsync are simply batch operations, repeating the same job over and over.

So in this case it'd be like moving a mound of dirt yourself, handful by handful, vs hiring a thousand employees to do the same task.
--One job is complex and requires central control/leadership; the other is just repetition. As long as you know one small operation is performed correctly, you can assert that repeating that operation will get the job done.
     
    Last edited: Jan 21, 2018 at 12:58 AM
  8. 321Boom

    321Boom Active Member

    Messages:
    81
    Likes Received:
    9
    GPU:
    GTX980 Ti
It's very good to know that rsync and robocopy keep the original timestamp, and that the original copy of the data on the 4TB storage drive will stay untouched and not be overwritten. Thanks for the clarification :) It does sound like these tools have their benefits over the casual drag-and-drop method.

I'm in no rush, believe me; all that matters to me is that the data gets copied across safely. So could I just leave it at the default value of 8, or would lowering it make things safer due to less fragmentation?

By 'enabled by default', you mean that robocopy <source> <target> /MIR will have the same result as typing robocopy <source> <target> /MIR /MT:8?

    Yep, definitely agree on testing the waters first. I'll create a few dummy files on my laptop and try a couple of syncs before I actually go for the real attempt on my gaming rig.

Yes, I know these tools will be very helpful in the future once I'm more familiar with them, but until I actually do it the first time and see that it turned out right, that's where all the paranoia comes in haha.

So, I read a couple of things about the /MIR switch in robocopy:
    1. 'Use the /MIR option with caution - it has the ability to delete a file from both the source and destination under certain conditions.

    This typically occurs if a file/folder in the destination has been deleted, causing ROBOCOPY to mirror the source to the destination. The result is that the same files in the source folder are also deleted. To avoid this situation, never delete any files/folders from the destination - delete them from the source, and then run the backup to mirror the destination to the source.'
    (https://answers.microsoft.com/en-us...e/ad91976f-ebb5-4ba8-88a0-532807f497f0?auth=1)

    Ever heard of this? It's quite terrifying and worrying. This site confirms that other users also experienced the same problem: https://social.technet.microsoft.co...elete-files-from-source-?forum=w7itproinstall

2. In the 2nd link above, in one of the comments at the end, a user states 'Mirroring is not safe when involving NTFS reparse points'. What exactly are these reparse points? Am I using them? (I tried googling what they are, but I couldn't understand it, unfortunately; some computer terms are alien to me x_X) The same user also linked to an article with his findings (you might find this interesting yourself, it concerns symbolic links): https://mlvnt.com/2018/01/where-robocopy-fails/

3. I read an article saying 'Robocopy fails to mirror file permissions – but works for folder permissions.' (https://blogs.technet.microsoft.com...bocopy-mir-switch-mirroring-file-permissions/) with a workaround to also copy security info:
> ROBOCOPY /Mir <Source> <Target>
> ROBOCOPY /E /Copy:S /IS /IT <Source> <Target>
Another command is mentioned in relation to the above two:
> ROBOCOPY <source> <target> /MIR /SEC /SECFIX

    What exactly are these file permissions and security info? Will I need the above code, or just the /MIR <Source> <Target> in my case?

4. Due to these hiccups I'm reading about with robocopy (especially /MIR deleting from both source and destination, in questions 1 and 2), would I be safer using one of the external drives with TeraCopy (that is, deleting the previous backup on it and copying everything across again with TeraCopy, similar to the drag-and-drop method I already use), and using the 2nd external drive for the /MIR command with robocopy? (Both externals should have the same data on them at the end of the transfer this way, right? And it might be a safeguard against the /MIR issue.) Your thoughts?
     
  9. A2Razor

    A2Razor Master Guru

    Messages:
    397
    Likes Received:
    15
    GPU:
    ASUS R9 Fury X
Yep, you got it. If not specified, 8 copy threads are used.

I've never heard of nor seen this, at least not unless I made a mistake myself [with junction points, or swapping the source and destination order] (just like my warning about doing that earlier in the thread). I'd strongly suspect, as other people conclude in those discussions, that the folks claiming missing files or folders in the source are making the error of flipping their source and destination. Or, as it was also put, it really wouldn't make sense for a mirror command to do two-way deletion (that would be ambiguous, like playing Russian roulette or rolling dice).

Put another way: how would robocopy know whether a file has been deleted from the destination "or" was just newly created in the source? From this premise you really have to have a direction for the mirror operation, as there's no accessible log to show otherwise -- a two-way mirror would literally be rolling dice, or relying on some voodoo to make an educated guess on each deletion.

Very, very unlikely if you don't know what they are. You'll "probably" wind up using them once you have a NAS, but by that point you'll have a better understanding of them too.

People typically call these junction points, symbolic links, etc. Most modern filesystems support them (including on Windows, FreeBSD, and Linux) ... an easier way to think of it is as a filesystem feature for storing 'pointers' or 'links' rather than the actual file. You've used shortcuts before (for launching programs or going to a folder); junctions are the same concept, but for software rather than the user. Using these special links, you can store the actual file anywhere you wish (even on another volume) and substitute a link in the location where a program looks for its files.

This has all sorts of purposes, but in gist, junctions provide a mechanism to reduce space consumption (explicitly getting rid of duplicates) or to shuffle data around without software needing support added for it. For example: if I plug a new disk drive into my machine and decide I want to move a game to another drive (because I'm running out of space), I can create a junction point from my main disk's partition to the new drive's, move the files over to the new disk, and voila, done. <= Easier than reinstalling the game, right? (The same can be done with save folders, etc, that would otherwise be hard to relocate / are hard-coded in the game.)

--Without any registry changes or anything else, the game is now running with its files stored at the new location. The game also probably has no knowledge that it has been moved. Although it's technically possible to detect this and check for junction points, almost nothing makes the effort [other than some anti-hack software such as GameGuard, and of course file-copy and FTP tools].
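
To make that concrete: on Windows, the move-a-game trick above is a one-liner from a command prompt (the paths here are just an example). After moving the files to the new drive:

mklink /J "C:\Games\SomeGame" "D:\Games\SomeGame"

creates a junction at the old location that points at the new one.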

Modern filesystems (NTFS, ReFS, ZFS, EXT, etc) have a concept of permissions and ownership. Each file on your drive(s) is owned by some user, and various access rights are assigned to each file on an account-by-account basis.
-You can actually see these pretty easily under Windows: right-click a folder (or file), select Properties, then click the Security tab.

For the types of files you have (such as your movies and documents), the rights on these files will probably not matter (there wouldn't be a reason to preserve them). For installed software, it "can" matter if programs create a special account to run under for the sake of security (it usually doesn't for things like games -- this is more of a specialty). Just like on Linux, some file servers on Windows will set up an isolated user account with dropped privileges to limit the sections of your drive they can access. It's an effort to limit what the software could write to "if" the server were compromised.
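
You can dump the same information from the command line too, if you're curious. For example (the folder name is a placeholder):

icacls "D:\SomeFolder"

lists each account and its access rights for that path.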


Whatever you're most comfortable doing is probably best. I'll repeat, though, that I've never personally seen robocopy delete files from the source unless it was my own fault.

--Theoretically you can do "bad stuff" such as creating a junction from the destination back to the source, yet I'm going to classify that as user error in the use of rsync or robocopy too. rsync has some features to protect against this, such as restricting a copy to one device (not letting the copy operation hop from disk to disk). I don't know if the rsync implementations on Windows can do this, but the Linux versions can.
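
For what it's worth, the rsync option I mean is -x (--one-file-system), so a guarded mirror on Linux would look something like this (paths are placeholders):

rsync -avx --delete /source/ /destination/

which stops rsync from crossing filesystem boundaries while it recurses.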


Teracopy is great in its own ways, and that's really my opinion on it (it's a different type of tool). Teracopy is fantastic for doing a copy where you want to be 100% sure that everything has arrived at the destination; it can be configured to do a read-back and verify hashes on each file. I'd say it serves a different purpose than robocopy, and I usually go with Teracopy when I want to clone a full drive to another or just write files to my NAS.

E.g., I used Teracopy as my go-to when I migrated my storage volume from NTFS to ReFS on this desktop (copying everything to an empty drive, for the verify feature).
     
    Last edited: Jan 22, 2018 at 12:24 AM
