How much free space to leave on an external HDD used for storage? + Corruption multiple backups?

Discussion in 'SSD and HDD storage' started by 321Boom, Nov 21, 2017.

  1. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Even so, and even with tried-and-true methods, I still suggest copying some random files to a temp folder and testing a mock backup operation on them. It's helpful since it makes sure you have the ordering right, that the command is sane, that you understand the arguments, and that what the command does looks like what you expect it to do.

    It's kind of like a trust-building exercise. Seeing is believing, and once you see a command work (importantly, as expected) a few times, then you can start to trust it a bit more -- and especially your understanding of its use.

    To verify that a copy has succeeded you'll need to use "-c" on a second run. Size and timestamp may match after a copy even if the data didn't actually get written correctly; with -c used, either case would still be flagged as a difference. A "--dry-run" is a mock run or no-changes mode, yet a dry run can still perform the file comparisons [it'll read, but not write or delete]. The difference is that if you specify --dry-run and files are found not to match, then beyond the log entries (and console output) no second transfer actually happens. (eg, any difference detected will not be corrected)

    So, let's assume that most of the time the verify is going to find that nothing changed (given disks are pretty reliable). Most of the time, if you were to provide "-c" as an argument without --dry-run, the result is the same as if you had also used --dry-run, in that the files will match and nothing actually gets changed. (thank god, since we certainly would hope our storage is pretty reliable)

    --Anyway, the whole idea is that even though most of the time nothing goes wrong, if you check each time then you at least know for sure that it worked. Even if the first thousand times we copy files everything goes just peachy, it's all about catching the 1001st that failed, and knowing preemptively so that we can do it again. (getting our mulligan chance)
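
    As a rough illustration (these paths are made up), a verify-only pass could look something like this:

    Code:
    # report any mismatches by checksum, but change nothing (made-up paths)
    rsync --archive --verbose --checksum --dry-run /data/photos/ /backup/photos/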


    Thankfully those 24TB are made up of lots of files, so you get lots of nice separate hashes -- probably thousands and thousands of them.

    If you were to generate one hash over a single massive multi-terabyte file (spanning all 24TB), then yes, the odds of a collision would be significantly higher statistically. (it'd probably need to be split up and hashed in chunks, or both files read at the same time and compared byte by byte during the read)
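
    As an aside, the byte-by-byte route doesn't need hashes at all; something like cmp reads both copies in parallel and reports the first difference it finds (again, made-up paths):

    Code:
    # exit status is 0 (and the echo runs) only if every byte matches
    cmp /mnt/source/huge-image.bin /mnt/backup/huge-image.bin && echo "copies are identical"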

    My bet is that "Windows Explorer", and not TeraCopy, did this while generating its one-time thumbnail cache (preview). --There's probably not much you can do about the last-access timestamp here, since the file truly was accessed to generate it.

    Yep, should be set for both generation and comparison.

    --As far as saving hashes after a run, that's up to you, depending on whether you think you'll want to re-run a comparison at a later time. Of course hashes can always be "manually" saved even if this isn't checked, at any time after a transfer completes. (so long as the transaction is still shown in TeraCopy's log)

    Keeping both (in the destination) will require renaming one of the two files, since there's a name collision. You're right in that you definitely don't want to skip, overwrite, replace, or anything like that. Each of the choices you're looking at is supposed to be a different way of deciding which file gets renamed -- the older of the two files, the copied one (source), or the target (destination). Since you're performing a backup, you'll probably want to select something like "Rename all target files". All of the choices only rename one file on a collision, and otherwise have no effect when copying a new file that doesn't already exist (where no rename would be needed).

    In all cases you'll find a (2) [or a (3), and so on] tacked on the end of one of the files, yep. Personally I'd say it should be the file that was already in the backup beforehand, which is why I'd select renaming the target specifically. I wouldn't use the "older" choice, as it's possible you might have restored an older backup to the source (intentionally) and not edited it this time around (eg, undoing changes).

    --The behavior of "keep both" I suspect should be the same (as you found it to be); the big button is probably just there to give a clear visual choice without reading through fine print (for ease of use). It's a bit hard to think of more than those three ways to rename one of the two files in a naming collision. (I doubt there's a good fourth)

    Paranoia is the right attitude to have when you go about backups, since double checking and pre-testing is the only way to prevent stupid mistakes. It really does work to just re-read everything a few times before you hit enter, and to check again afterwards and read all logs. Always better to preempt problems than deal with them head on later.

    The great thing about being a PC user, and especially a paranoid one, is that you can get an overkill (equally paranoid in every aspect) setup -- filesystem, software / tools, OS, memory, drives, etc -- to mirror your concern. I strongly suspect that everyone who works on creating these things is in the same mindset as us, literally LOL. But yeah, going overboard with redundancy helps reduce the risk at least a bit. :D
     
    Last edited: Feb 9, 2018
  2. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Yep, this is definitely good advice. I'll surely test on some small-scale random files before going for the actual real backup :)

    So, as an example, the first time I copy with rsync I use:

    <source><destination> -av -x, --one-file-system

    Sorry if I butchered that. Is that correct with the space between the -av and -x, and also between -x and --one-file-system? (-x, --one-file-system seems like it is one whole command from the rsync page: https://www.freebsd.org/cgi/man.cgi...ath=FreeBSD+8.0-RELEASE+and+Ports&format=html)

    Then, to verify that everything is safely copied across, I would run the below command straight away without closing rsync after the transfer was made:

    <source><destination> -av -x, --one-file-system -c -n, --dry-run? (the -n was next to dry-run in the rsync page you linked me to above)

    Yeah I'd definitely want to check each time, as you said, it might be on the 1000th time, but at least I'll know about it and could re-do the transfer :)

    Sorry I didn't quite get this, why would I want to generate one hash spanning over all 24TB? Isn't the concept generating lots of smaller hashes in hopes they won't collide? Also, from your 24TB hash argument, why would the odds be higher if it's just one hash over the whole 24TB? :/ (if it's just one hash it should be close to impossible to collide wouldn't it?)

    Got it, thanks for the explanation, makes sense about the thumbnail preview. Rsync won't also have this issue would it? Because if so, remember when I said I wanted the original copies of the 4TB storage drive I currently have in the gaming rig to stay untouched and not copied over? Well if the date accessed is different when I rsync back to it, everything which gets a different accessed timestamp will get overwritten :/

    Awesome, I'll leave it set for both then :)

    If I'm backing up onto an external drive, there wouldn't be a need to do so, would there? Earlier in the thread we said this might be useful in case a program tampered with the files, but nothing could tamper with them on an external if it's barely ever connected to an OS, correct? (sorry, I still don't fully understand how this would benefit me; as long as I know the data got copied over safely for archival on an external, I don't understand why a re-check of the hashes would be needed). Earlier in the thread you gave the example of comparing hashes on a gaming folder after it was updated, if a game is working for me and not for a friend -- I get that, but I don't see how it would make sense for archival (since it's not being updated, and you don't want it tampered with)

    Got it, 'Rename all target files' does seem like the best option. You mentioned 'Rename the copied (source)', I tried this while testing, and it is NOT renaming the source. I re-tried it several times :/ Only the target is getting renamed. So what is the difference between this and 'Rename all target files'?

    Yep, indeed. It's one of those critical times where you don't want to make a mistake or rush, so being cautious (and paranoid :p) is a good thing in this case xD

    Haha, overkill has different meanings to different people though, unfortunately. Talking to my friends about this whole thing, they say 3-way mirroring + 2 offline external drives is overkill; for me though, let's just say it'll help me sleep better at night, but it's definitely not overkill. (Overkill in my case would be having another server mirroring my server, in another country (I have family abroad), in case of fire/theft/the roof falling in or some other horrible disaster, but finances and viability make it difficult to maintain a server in another country lol :p) (which is why it's SO important to keep off-site backups)
     
  3. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    "-x" and "--one-file-system" have the same meaning, two representations (a shorthand and a longhand) are given to you so that you can opt to write things out long for readability (closer to English rather than in alphabet-soup), or quick and abbreviated. In essence you can use either, or both as a mixture. You can mix long and shorthand, yet to be clear on this let's give some examples.

    The following should all be valid and the same in meaning:
    Code:
    -a -v -x
    -avx
    -vax
    -xav
    -av -x
    -av --one-file-system
    Hope that's a bit clearer. The extra dashes can be omitted when combining the abbreviated shorthands together with no spaces between them.
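
    In a full command line the options simply sit in front of the source and destination (the paths here are only placeholders); these two lines mean exactly the same thing:

    Code:
    # short form and long form -- identical in meaning
    rsync -avx /tank/media/ /mnt/external/media/
    rsync --archive --verbose --one-file-system /tank/media/ /mnt/external/media/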

    Yep, the only cost is time, and the amusing aspect there is that since we're either asleep or not home for a lot of each day ... the cost is actually "nothing", as our machines would otherwise just be idle or turned off. Just gotta get in the habit of queuing these checks up for efficiency's sake; they become second nature and aren't that inconvenient to do.

    You don't want to generate a hash for all 24TB at once, but fortunately it's rare to have individual files that large in practice. Probably the largest files you'll naturally encounter are for virtual machines or backup images of a full disk drive (aka, if you make a backup with TrueImage or HDM, you can get massive files out of that). The backup images actually tend to store their hashes file by file or block by block. The more the merrier -- this is one CORE REASON that backup software prefers to split files (not just to suit filesystems like FAT that have limits on max file size).

    The problem is that only so much information can be stored in a hash; it's only so long, and that means there are only so many possible values. As the size of the file being hashed increases, the length of the hash itself does not increase. --The collisions you have to worry about are the ones between the source and destination copy of each file, not between different files.

    -The smaller the files, the fewer possible unique combinations exist (that need to be represented by a single hash). So, using more hashes is like splitting the data being represented across them all. (divide and conquer)


    The only archival scenario where it helps is if you fear some program may not have been closed during the process. In that case you can take the destination hashes and run a comparison against the source afterwards, which tells you whether the source changed during the copy.

    --Is this actually useful? Not for someone who's careful (so probably not in your case), and a re-copy can also be done to solve that problem if you were concerned it might have happened during the first copy.
    (*like the suggested way to check with rsync*)

    This sounds like a bug. I've not personally re-tested the choices in a long while, so you might want to check again and shoot the dev a report if that's true. (so he can get it fixed up)

    Oh, three-way mirroring is definitely overkill, and backups of most data are also overkill for most "normal people". I even consider half of what I do overkill myself (by my own definition of overkill), yet I personally don't care, as I know that I'm a lazy person. I sure as heck won't spend the time to replace what's lost, even if it's possible to replace (unless absolutely REQUIRED) ... thus I might as well make sure it's as hard as possible to lose. The bane of laziness, haha.

    -All things considered though there's that popular saying that if you're going to do something, you might as well do it right so that you can do it only once.


    I also just think I like doing things overkill, and most people at Guru3D naturally fall in this category. We're all enthusiasts here in different ways with our PCs -- some in videocards, some in their data storage, some in overclocking (even when most software doesn't "need it"). We're all anal, lol.
     
    Last edited: Feb 11, 2018
  4. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Wow thanks, that's a really good explanation and makes rsync seem easier to get used to. Good argument about the longhand versions for readability, while rsync also has the shorthand versions for more advanced/comfortable users.

    So, when mirroring data from the server to an external back up drive, I'd use:

    <source><destination> -avx (should be the same as <source><destination> --archive --verbose --one-file-system)

    Then to verify that it copied across safely and error-free I'd use:
    <source><destination> -avxcn (<source><destination> --archive --verbose --one-file-system --checksum --dry-run)?

    No other commands to tack on?

    Also, I'm assuming I shouldn't close the rsync window after the transfer (<source><destination> -avx) and should run <source><destination> -avxcn straight after?

    Yep, similar to how the server will be doing its weekly verify while we're asleep and at work :)

    Yeah, I assumed the gaming rig's OS backup with Paragon would be an image of around 100GB (OS and a few games), so that's quite a big file; it's good to know it'll be covered by multiple hashes rather than just one for such a big file.

    Got it. Did you ever experience any collisions? Any idea what rsync or TeraCopy would show when they detect a collision?

    Whew ok, good to know it's not something to be concerned for with my use. Believe me I have absolutely nothing running while I'm taking a back up, I want the process to go as smoothly as possible. I could see where people that use Virtual Machines could have different needs for this than I would though. Thanks for the explanation :)

    Got it, I just double checked, it still did it. I'll send him an email after this post ;)

    Sorry but there's something I asked above and I didn't get a reply on, and it keeps bugging me so I'm gonna have to ask it again (my ocd won't leave me in peace believe me, horrible little disorder):
    When we were talking about the Date Accessed changing for video files when copied with TeraCopy, you said it's probably due to the Windows Explorer generating a thumbnail preview, this is what still concerns me: 'makes sense about the thumbnail preview. Rsync won't also have this issue would it? Because if so, remember when I said I wanted the original copies of the 4TB storage drive I currently have in the gaming rig to stay untouched and not copied over? Well if the date accessed is different when I rsync back to it, everything which gets a different accessed timestamp will get overwritten :/'

    Haha believe me, I'm not 'most people', I literally hold my breath when inserting a game disc into a PC/console to stiffen my hand and not risk knocking the disc against the sides of the disc tray and scratching it (my games and anime are everything to me). Haha, that's unexpected, I never made you out to be a lazy person after such long informative replies :p Lol at 'bane of laziness'.

    Yep, I really believe in that: if you're doing something, do it right. It'll avoid headaches in the future, and there's something satisfying about a job well done, rather than knowing you half-assed it.

    Haha indeed about anal. I agree 100%, but it's these people that work harder to achieve their goals and have the best set ups ;)
     

  5. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    These should be the key ones you'd use for cloning your data to and from externals, and to the NAS as well -- the main ones you'd use for general backup. On your description of the commands, the long and shorthands -- yup, looks like you've got a pretty clear understanding of it now.

    --You can run one rsync command after the other in the same terminal window, or run them in different windows (and/or close the first terminal and open a new one) -- this shouldn't matter, short of running both at once (you definitely don't want to do that). As long as your compare is run after the sync, serially, in sequence, you're golden.
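
    If you want the "in sequence" part to be automatic, the shell can chain the two so that the verify only starts once the sync has finished successfully (placeholder paths again):

    Code:
    # the checksum dry-run only starts if the first sync finishes without errors
    rsync -avx /tank/media/ /mnt/external/media/ && rsync -avxcn /tank/media/ /mnt/external/media/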

    To be a bit clearer on this part and the hashes with backups: rsync and robocopy would still verify the backup image file (or its segments) copied from disk to disk with single hashes. However, the backup image itself also keeps its own hashes internally per file, and you can actually use either TrueImage or HDM to 'check' an image or part of an image before restoration (if you're wondering, yep, they also do this automatically during any restoration). eg, both will warn you if an image contains detected corruption in the restore process. (and they'll tell you specifically which files are damaged too)

    ^ It's a good idea to keep a couple of your backups for this reason. (just in case one runs into a problem during a restore, so you can get the rest of your data from a slightly older image file if you had to)

    On TrueImage and HDM and the potential for huge image files: they can also be set up (when you make a backup there's an option for this) to split the archive into segments. Knowing what you now know about file sync and copying files, and how these tools all operate -- this is a very good idea in case you ever want to copy one of your backup images off the NAS to an external disk.
    (it's the default behavior to split them up [and the default sizes are pretty good], but you can customize it too and specify the split-size)

    Collisions would go undetected by the copy tools (by their nature). If one were to occur when using hash comparisons with rsync or robocopy, then the file would not be updated in a sync. eg, the copy of the file with the matching hash won't happen, since the tools believe the destination file matches the source as a result.

    How you'd notice these as a user of the machine would be the same as damage (something being off): a game not running carrying outdated files, an archive not opening, a movie not playing, damage to images, something just generally being very-wrong (as we're talking about some form of complex corruption). I've never noticed any quirks on this in all the years that I've done clones (and you know me, Mr. Anal on backup and restoration). It's possible that I simply haven't noticed a failure, yet the odds are so low to begin with, with a naturally occurring corruption (from a disk failure or otherwise).

    In the case of a direct copy (overwrite all), the same applies. A file could be copied, be checked after copy, appear to match the source (via comparison) yet actually not match it. The thing to keep in mind here is that naturally occurring damage to the file such as from a failing disk would be very difficult to cause a hash-collision.


    --Just as with everything in life, it's hard to drive risk to absolute zero, and it's best to think of this as doing the best you can. (kind of like: if you have a 0.0000001% chance of losing some files in the next 6 years, I'd take that any day over 2-10%)

    TeraCopy uses stronger hashes in its comparisons, so the odds with TeraCopy of this should be lower. (although it'd still be possible)

    My bad on not answering [not my intention there]. I legitimately am trying to respond to everything, yet I don't do it "systematically" each time. I reply in chunks, out of order, and apparently forget a point or two occasionally. (quoting out of order to make it easier to match my thought process, really)


    Anyway, onward to that question:
    The last-access timestamp being updated won't be a problem, as it's actually not used in the comparison of files (not for the sync operations in rsync or Robocopy, nor for TeraCopy's copying of file contents). When the last-access stamp alone is updated, that means a "read-only" operation (which can't alter a file). If the file was both read and written to (eg, opened in write or append mode), then the modify timestamp would have been updated as well. Each file copy tool is concerned with modification and creation, since those changing almost certainly means that a file's contents have changed. (which is why in those cases they don't even bother with the initial hash compare and go straight for a fresh copy in a local copy or sync)

    -A lot of people in fact turn off last-access timestamp updates, since this bit of information is not commonly used and is an extra write-operation on each file access.
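
    For reference, on Windows this is toggled through fsutil from an elevated prompt (1 disables the last-access updates, 0 re-enables them):

    Code:
    C:\> fsutil behavior query disablelastaccess
    C:\> fsutil behavior set disablelastaccess 1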

    It might be more a lack of time and prioritizing. But yeah, I'll call this laziness, since I might prioritize playing a game over working to re-create a lost project. When I call something important to me or priceless, I'd have to re-assess that after losing it, weighing re-creating it vs spending that time on something else. (which I absolutely hate doing)
     
    Last edited: Feb 14, 2018
  6. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Glad to know, thanks for the clarifications, rsync isn't looking as difficult as it previously did :)

    Got it, yep definitely not both at once (wouldn't even dream of running a copy and verify at the same time, that sounds like a recipe for disaster).

    Sorry, but I don't get why we brought in rsync and robocopy for the backup image of the gaming rig. Wasn't this going to be done with Paragon HDM?

    That sounds very promising about TrueImage and Paragon checking the image for corruption. Seems like lots of things were thought of in the making of these programs. Yep, it makes perfect sense to keep a couple of older image backups in case something went wrong during the latest OS backup.

    Yep I'd definitely prefer to split the archive in segments, it's really neat that it's enabled by default.

    One thing I don't understand though about 'both will warn you if an image contains detected corruption in the restore process. (and they'll tell you specifically which files are damaged too)': how will corruption come about if the backup image is stored on the server with 3-way mirroring? If corruption were detected, wouldn't the other 2 mirrors fix it (with the RAID10 voting logic)? :/

    Ahh ok, now I fully understand the true threat of when a collision happens (since the files won't be copied over cause it'll think it's the same file). Glad to hear you've never experienced a collision :D Hmmm, that's a very bad way/might be too late of noticing that there is an error ('a game not running carrying outdated files, an archive not opening, a movie not playing, damage to images, something just generally being very-wrong (as we're talking about some form of complex corruption)'). So, as an example, let's say I have a folder on the server called 'Pictures' (which is the source), and this was rsynced to an external drive (which is the destination), and there was a collision, so a couple of the images in 'Pictures' (source) did not get copied over. If I had to right-click -> Properties on the source 'Pictures' and do the same on the destination 'Pictures', the Size would not match, correct? (since some files would not have been copied over) (I compare on the bytes in brackets, not the simple number like GB or MB)

    'The thing to keep in mind here is that naturally occurring damage to the file such as from a failing disk would be very difficult to cause a hash-collision.' Is this a typing error? Why would this be very difficult to cause a hash-collision? If the disk is failing isn't that where it is most likely to cause errors? :/

    Yep same here, the lower the risk, the better :) That's why I would like the HGST drives, lower chance of failure rates ;)

    Yep due to having other checksums like SHA instead of MD5 right? So there's no way of making rsync use SHA instead? (or some other form of checksum that is stronger than MD5?)

    No worries :) I know it's difficult to reply to all my questions (it ends up as quite a wall of text sometimes, so it's easy to miss something).

    That really is awesome and puts my mind much more at ease knowing the last-accessed timestamp is ignored.

    '-A lot of people in fact turn off last-access timestamp updates, since this bit of information is not commonly used and is an extra write-operation on each file access.' I never knew you could turn off certain features, can you clarify what negative effect this 'extra write-operation on each file access' will have? (longer time to take backups since there is an extra field of information on each file, higher chance of corruption due to extra writes?)

    Yep I completely understand what you mean about lack of time, life really is busy, so to re-create/restore/hunt down all those TBs worth of data again would take an incredible amount of time. Same here, I hate having to re-do something that was already done (not enough time in the day as it is), which is why this 3-way mirroring really is a blessing and will put my mind more at ease :) (coupled with routine external backups, never rely on just one source) ;)
     
  7. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    There will probably come a time when you have a fresh OS install that works perfectly, that you think would make a good image to keep around, and which you'll have verified already [maybe even restored to another machine already], and so you want a copy of the backup on an external drive, or two, or three. (to use as a replacement for installation media that you can take with you on the go -- maybe even to work)

    --As an example of a "why": say you did a fresh OS install of Windows 7 with all updates (something that takes a pretty long time) and all the base tools that you use, pre-configured just the way you like it (everything in the OS set up perfectly, except drivers, which hopefully you left for last). This is a 'fresh image' with no bloat on it yet, no garbage or excess, and a potentially massive time saver (vs doing an install again). Thus, that OS image is a superior starting point for future fresh installs of a machine. (especially given it's lacking drivers and is pretty hardware neutral)

    eg, rather than reinstalling the OS from scratch constantly (which can take hours), this is the way to get a machine up and running from scratch in 20 minutes. Then you just need drivers and you're ready to move on to the next PC... This is a way that "IT pros" tend to do it for bulk machine setup. You clone that image to a couple external drives and do many machines concurrently with a LiveCD of some backup software + your external drives. :D


    ^ Bottom line is that you can clone to your machine and you can also clone to other machines too, and this has all sorts of applications. You could just use the backup software multiple times, but if you want the same image on multiple externals it'd be faster to just copy the files over to each.

    TLDR: The "yo dawg I made a backup of your backup", so you can...
    -Restore without access to the NAS.
    -Restore many copies of the backup image concurrently. (avoiding IO / network choke)
    -Just have a backup of the backup. (failsafe / safe box, or to prevent accidental deletion)

    These tools don't assume that normal people will have a 3-way mirror (or any redundancy), so that's the main reason the feature exists. Though also remember that image corruption can happen for a number of reasons and it's not just tied to your storage media. (such as failing RAM)

    --Think about if you're backing up your gaming machine, which lacks ECC memory, to the NAS. If a memory error happens in the middle of a backup process then your image right from the get-go may carry some corruption.

    While it's protected once stored on the NAS, that does nothing to make sure the image is actually intact and can be restored (after the backup process). Performing a check can make sure that files can at least be decompressed and/or decrypted, so that you get at least something out of the archive for each file. Yet, due to all the stuff that can go wrong (like RAM errors), it's of course still not a guarantee that everything in the backup is sane. -- That's another reason it's smart to keep a few.

    A collision can only happen if the size matches between the source and destination, as rsync's quick check is based on file size and timestamp. Only when the size matches does a hash comparison even come into play. If the size doesn't match, then a change is assumed (without any check).

    "-c" will force the check to be performed regardless if the size and date matched, however a size mismatch is still treated as the file having been changed. (with the -c argument used or not used)

    EDIT: Remove double negative typo.

    When disks fail (at least hard drives), usually a few sectors or some portion of the disk surface go bad first (rather than a total failure). SSDs tend to die all at once from what I've seen, but back to the point: you usually get either some isolated partial failure or a complete failure.


    Parts of files get some errors sprinkled into them, yet usually not the entire file or large chunks of it. All of these hash algorithms are intentionally designed so that small changes to the input produce hashes that are greatly distanced from each other. That's what I mean by it being difficult to cause a collision naturally.

    When you notice file damage on a bad HDD, the damage can be pretty widespread (since it's usually quite a while before you notice), yet the root of it is usually some small cluster of failing sectors. The damage then spreads from there as files are (purely by chance) moved to and from these bad areas. (without getting marked or detected)

    Sadly there's no way to make rsync use something other than MD5 [at least not as implemented in the software right now]. Someone could create their own fork of rsync and replace the MD5 calculation with anything though, given it's open source. (that may already be done in one of the forks, although I'm not aware of it)
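
    If you ever did want a stronger check than rsync's MD5 without forking anything, one workaround outside of rsync is to build a SHA-256 list on the source and re-check it against the destination copy -- a rough sketch, assuming GNU coreutils and placeholder paths; anything reported as FAILED would be worth re-copying:

    Code:
    # build one SHA-256 hash per file on the source (made-up paths)
    cd /tank/photos && find . -type f -exec sha256sum {} + > /tmp/photos.sha256
    # re-read every file on the destination and compare against the stored hashes
    cd /mnt/external/photos && sha256sum -c /tmp/photos.sha256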

    The extra write operations could yield more wear on SSD's, though in practice I don't think this matters considering SSD's can write many many many terabytes before dying.

    Last-access updates might also slow down software that constantly opens tons of tiny files, like a game that ships thousands of small files rather than packing them into large archives. Disabling last-access stamps means skipping all those writes when accessing these files, and that might reduce stutter or speed up a game, especially on an HDD where random access isn't the best.


    Everyone loves reading right? If nothing else you can say we're getting optimal use out of the forums even with a day or two delay between our posts. ;)
     
    Last edited: Feb 19, 2018
    321Boom likes this.
  8. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    This sounds like very good advice and a massive time saver to get everything up and running again instead of re-installing Windows. I used to re-format every year or so when things would start slowing down, but thankfully I'm not seeing this problem yet with my gaming rig on a year-and-a-half-old installation. Could it be because the OS is on an SSD this time? (this is the first time I've had an SSD)

    So, will using the 'fresh image' give the same benefits as if I re-formatted and re-installed Windows from scratch? You know, after a fresh install the pc feels like it's brand new again and mega responsive. No differences at all?

    If so, it sounds like an awesome idea and something I'm very interested in, especially the part about having all of the Windows Updates already installed (Windows 7 takes FOREVER to pull down all its updates). So all I need to achieve this is Paragon HDM?

    Yep, it makes lots of sense not installing any drivers yet before taking this image, so it could be used on multiple machines. Wouldn't Windows Update automatically find the drivers for your detected hardware though? (botching up the clean image in the process) :/

    This is a very interesting idea for me, because I know that Windows 7 support ends in 2020, and it's been worrying me for a while, because I need it to play certain games that aren't 100% compatible with Windows 10. With support ending, I fear some of the Windows Updates will not be reachable (I need a couple of them to make the games work, like the Japanese language pack and .NET Framework). So, you're saying I could have the 'fresh image' backup with all of the Windows Updates already installed in it, including the language packs and other things I'd need? So basically it won't matter if support ends and these servers aren't reachable anymore, because I'll have everything I need to play the games already in the 'fresh image' backup?

    Haha true, but we're not normal people remember? :p

    This 'corruption from the get-go' only applies to the 'OS image' backups, right? I assume so, since Paragon HDM will be handling the copy/write process directly (not TeraCopy).

    If I had to copy one of my recorded gameplay videos from the gaming rig to the NAS, since I'm using TeraCopy it should arrive safely, correct? (especially due to its Verify feature, and from there I know it's safely archived on the NAS)

    Yep, I agree and can see where it makes sense keeping a few older OS image back ups.

    This makes it sound like chances of collisions are VERY rare then, if even the file size has to match. It's always good to run -c just in case though, to also get a verification by checksum.

    Sorry I'm a bit confused here, didn't we say earlier in the thread that in a RAID10 array when these bad sectors are detected (should be weekly from the weekly Verify), that it will move all those files to a new sector and flag that sector as bad? (which from there an error will be reported in the log and I just eject and switch out that bad HDD?, and no corruption of data would happen due to the other 2 mirrors still having a copy that's intact?) :/

    That really is a shame, cause apart from that it sounds like such a great program. Something I didn't understand from the rsync page you had linked me to (https://www.freebsd.org/cgi/man.cgi...ath=FreeBSD+8.0-RELEASE+and+Ports&format=html) about the -c command:

    Code:
    Note that rsync always verifies that each transferred file was
    correctly reconstructed on the receiving side by checking a
    whole-file checksum that is generated as the file is transferred,
    but that automatic after-the-transfer verification has nothing to
    do with this option's before-the-transfer "Does this file need to
    be updated?" check.

    For protocol 30 and beyond (first supported in 3.0.0), the
    checksum used is MD5. For older protocols, the checksum used is
    MD4.
    The first part I get and it sounds good, but what about 'that automatic after-the-transfer verification has nothing to do with this option's before-the-transfer "Does this file need to be updated?" check'?

    Current versions of FreeBSD use protocol 30 correct? (to benefit from MD5 instead of MD4)

    So I did some research and toyed around with this a bit, and it seems that from Windows Vista onwards the last access time stamp is off by default? (as soon as you said games might perform better I wanted to check haha). I checked on my Windows 7 machine, and also on my Windows 10 laptop with:

    Code:
    C:\> fsutil behavior query disablelastaccess
    and it indeed is saying it is off (DisableLastAccess = 1). For some odd reason though when copying to an external drive the last accessed time stamp is still being updated. If I copy from my 4TB storage drive to my recording drive (2 internal drives in my gaming rig) it doesn't get updated! So it's only updating when copying into externals. :/ What gives? (glad to know it's off in the system though, avoids all those extra writes as you said)

    Haha I actually like reading through forums (older generation :p). Lol yeah, very good use of the forums, this thread almost hit 90 posts xD Glad to see other users are reading through this as well and learning more about how to keep their data safer :) With my paranoia and your expertise, we hit over 6000 views :D
     
  9. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Fresh OSes run fast simply because there's nothing on the machine. As you accumulate services, apps, games, auto-updaters, yada yada, and as the registry and data of the OS grow over time -- that's all more potential "stuff" (no better term for it, since it can be anything really) to read off the drive when booting up the system.


    As the slowdown (especially in boot time, where people notice it) is primarily from bloat [accumulated traces of past-installed software], you betcha that a faster processor and an SSD definitely mitigate it and make everything more tolerable. (at least for longer)

    --It used to be unacceptable for most gamers to have 100+ resident processes on Windows not that long ago, but now I see people with 200 or more on machines. Multi-core CPU's and SSD's definitely changed the way people use PC's, and how much "crap" they tolerate running on them.

    Yep, a clean pre-installed OS image will run just as fast as a fresh install, and with the benefit of omitting the need to do all your setup from scratch. (eg, the extensive personalization of OS settings and installed software)

    Paragon is all you need to do this, yes.

    Better yet: each time that you restore your image (wipe a machine), you can use that as an opportunity to run future OS-updates and then re-perform the backup after. aka, as maintenance to keep the image up to date so that Windows Updates (that have to be run after the restore) don't keep piling up.

    [** other way to do this is to use a VirtualMachine that you run updates in from time to time (VM to physical hardware, with the VM as the clean-base) **]

    With Windows 10, this can be tricky and hard.

    --You can work around the auto-update driver install by installing with no Internet connection and disabling driver searching, OR alternatively by using a VirtualMachine for the clean base (first install) with "very generic" hardware emulated. Nothing that will need specialty drivers downloaded from Windows Update.

    ^ I suggest using a VirtualMachine personally, it's just far easier to deal with 10's update policies this way.

    I'm doubtful that Microsoft will pull their update servers afterwards, though yep that's the idea.

    Through backing up an OS and restoring the backup image, you only have to install those language packs or anything else you need "one time". This is essentially a much easier way to create a customized OS install (any OS: Linux, Windows, etc), as an alternative to slipstreaming updates, scripting software installs, doing registry merges of settings, etc.

    Indeed we are not.. That's such an understatement, LOL...

    Whereas the average person doesn't even care, I meanwhile fret the stuff such as a game being "pulled from Steam" in the future (not being around in a few decades), or Steam-Cloud not having a copy of my savegames (and then an expansion being released). Or heck, even bandwidth quotas with an ISP if I want to re-play some large titles.

    Yep, the corruption from the get-go would be on par with TeraCopy mucking up a copy process, which is the whole point of the verification (to have a chance to catch the muck-up after it happens). This is effectively HDM's own implementation of TeraCopy's verify feature, or their way of showing they care about data integrity too.

    That's the incentive to check, check, check.

    Always gotta check that things succeed and never assume they do. Only once you've made sure can you put your mind to rest on whether an operation actually did. :D

    Nope, you're definitely not confused here. With multiple RAID mirrors (voting logic), ECC memory, ZFS's hashing, and a stable non-overclocked system -- you'll probably find that all the copy operations you wind up doing over the years pass (at least those done locally on the machine). That holds even through disk failures, replacements, bad memory, and other hardware failures.

    Pretty much the only time that you're actually going to catch problems with hash checks in a copy, restore, etc, is to a single disk, a non-ECC system (like the gaming machine), or over a network connection. Yet in the cases that you do catch a problem, the damage will be "small" to the file that you wind up catching if it's naturally occurring and not from software (eg, malware) tampering with the files.

    --Think of it like a last line of defense.

    I think this is just wording semantics.

    From my read-through, it sounds like what they're trying to elaborate on in the second part is that the "-c" argument has no impact on the hash calculation for chunks of the transfer (reconstruction on the remote end), and only affects the initial ["Does this file need to be updated?"] check. eg, even if the initial hash check is bypassed by the quick check (a size mismatch, for instance), the file will still be hashed and verified (chunk by chunk) in the transfer to the remote end.

    ^ Unfortunately that network verification doesn't do anything for reading back the file on the remote end from disk, though. (so, -c is still a good idea with a large copy even if files didn't exist on the remote end first)


    -Current BSD versions should come with a modern rsync, and the modern codebase will always use MD5, yep.


    From my Windows 10 LTSB install:
    Code:
    Microsoft Windows [Version 10.0.14393]
    (c) 2016 Microsoft Corporation. All rights reserved.
    
    C:\users\x\desktop>fsutil behavior query disablelastaccess
    DisableLastAccess = 0
    Seems to also be default-ON on my Server 2016 install.
    --I'll have to see with other OS's, but it looks like this is a good idea to double-check.

    I personally don't use the last-access stamp, yet I tend to just leave it on with any machine that has SSDs for the main disk(s). [I used to be quite wary of the writes (with my first SSDs), though I've come to accept that most people over-reacted in the past with concerns about wear-leveling & write limitations]

    Text chat and forums will probably never die. While the new generation may like YouTube tutorials, etc, audio just can't easily be searched nor digested as fast.


    Hopefully paranoid people reading through this realize at the least that they can do as good a job with their data as Google can.

    You don't have to entrust your data to some company ... wondering how safe your data is with them, how they're protecting it, etc. You don't have to be an IT professional to set up a business-grade NAS, and you don't even need enterprise hardware to do it either. This is all readily available, and the software is entirely free. (thanks to BSD, Linux, and other open works that run on them)

    --Though TeraCopy, Paragon HDM, Acronis TrueImage, etc, etc, are honestly worth their cost too.

    That's the big take I hope people reading along get out of this. It's much easier than you'd initially think.
     
    Last edited: Feb 21, 2018
  10. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Thanks for the explanation about the SSD and faster processor mitigating the slowdown. I knew they would provide a boost while using the system (SSD is lightning fast compared to a HDD), but didn't know they also had this advantage that it'll be much more tolerable till the next re-install. (who likes re-installing everything right?)

    This really was a very helpful idea you brought up, thanks for suggesting it, you really saved me from a future headache. Yep, makes sense installing future updates to a fresh image to keep it as up to date as possible. This is definitely something I will be implementing. A huge thanks for the suggestion!

    I don't need it for Windows 10 (it's useless for me for gaming). Windows 7 is the last OS that is fully compatible with some games I play. No need for VMs for the Windows 7 fresh image backup?

    Well, I'm not saying it's guaranteed that they'll pull the updates, but since I rely on this OS so much to be able to run all of my games well (some games date back to 1996, hence the compatibility issues with Win 10), it's a very good idea to have everything stored as an offline file in case something does go awry in the future.

    You're not fretting over small stuff; I have no idea about Steam, but a couple of games were pulled from PSN (PlayStation Network) before, so it is possible for something like that to happen with Steam.

    Awesome advice about Paragon, thanks, looks like a very well made program which I will be investing in :)

    This could be quite problematic (and time consuming) :/ Picture moving several gaming recordings to the NAS -- I'd need to open (and view) each mp4 file to make sure it actually got there safely, corruption-free, even though the Verify said it's fine?

    That's very good about the copy operations done locally on the machine, but what about:

    'Pretty much the only time that you're actually going to catch problems with hash checks in a copy, restore, etc, is to a single disk, a non-ECC system (like the gaming machine), or over a network connection.' Won't the ECC desktop be saving everything to the NAS over a network connection? :/

    Hmmm, that could be quite a problem for people transferring data over a network, but in my case, where I'll be rsyncing to external drives instead, will I still be affected by this problem? (since no network verification is needed)

    Awesome about the current versions of FreeBSD btw, thanks.

    Huh, that's weird, both my machines (Win 7 and Win 10) have it off by default :/ (I just checked again) I'm sure I've never tampered with this in the past, because I only just heard about disabling the Last Access timestamp from you. Oh well, no harm done at least, glad it's been off all this time xD Yeah, I've read about the write limitation on SSDs being an over-reaction, and more of an issue back when SSDs were introduced rather than for current SSDs. (still better having it off though, to avoid those 'tiny' writes during games)

    Yep, indeed about being easier to search, very good point. Plus there's something soothing about reading through a forum in some peace and quiet rather than all the noise and chatter on a Youtube tutorial (maybe I really am getting old haha).

    Yeah, never trust your data in the hands of others, it's nowhere near as important to them as it is to you, so you gotta be the one to make the effort and take the necessary steps to set up a system that keeps your data safe, up to your standards.

    I fully agree and can vouch for TeraCopy. Paragon HDM looks like a real problem solver for me too, and while I haven't tried it out yet, it sounds like it will be worth its cost.

    Yep, hoping some other paranoid souls will start investing in building servers too :D
     

  11. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Ah, for the "check, check, check" part, I meant more just not doing a blind copy (like with Windows Explorer). Having a read-back and hash-check being pretty-darn-good for having checked the files.
    --TeraCopy to the rescue here for automating it, basically.

    If TeraCopy reads back the files from your mounted NAS and the files pass a verify (or to an external disk, etc), odds are pretty strong that you're golden and the files were copied correctly to their destination.

    Fun fact: "TCP" actually has error-checking built in, yet this error-checking is just basic check-summing of each packet (nothing like MD5, SHA, etc). Most of the time even the sum (in each packet) will catch network errors and the bad packet will be retransmit. However, just as with everything else in computers there's also the problem of non-transmission corruption of packets, such as with bad-drivers and bad-implementations of hardware offloading where a packet is accepted even though it shouldn't have passed the checksum. (There's other problems like man in the middle tampering when over the Internet too)

    Anyway, locally over a LAN, anything with a verify that reads back the file(s) after uploading to storage should be good enough to catch these failures. I seem to recall that hardware offloading is default "disabled" in FreeNAS as well.. There's still always the potential for errors from the sending side (your machine uploading -- the source) though, thus the importance of always having some type of a verify of transferred data.


    ^ On that note:
    You've probably encountered at some point in your life (or seen someone with this) a problem where you (or some other person) just randomly disconnects from a game. [Instant Disconnect, BAM] If not caused by a bad router, the ISP, or some Internet Gateway (eg, if it also happens in a LAN-Game / Local-Play) that's usually network-corruption getting past the TCP checks. (corrupt data making it through to the game-server, which in turn makes the decision to kill that client)

    --Usually updated NIC drivers solve this. Disabling all forms of offloading also tends to. (less of an issue with Windows 10 due to the way WUA pushes driver updates now)

    Right, this won't affect you and you can pretty much ignore that second section of their description.
    --Although even with the transfer protection, using "-c" with a re-run is still highly recommended to catch any problem.

    (to a NAS with ZFS + redundancy, you'd be pretty-good even without the read-back re-test)

    It's possible that something made a change "automatically" on either of our machines. These two systems of mine aren't clean at this point either (in that they've had plenty of dev tools installed). -I'll try to remember to re-check on the next clean install I do.

    That's a good point.

    I'm also worried that some services can just be shut down entirely, since it's very hard to predict the future, which companies will fail, etc. Consider that OnLive is no longer around today -- could the same happen with "Geforce Now" in a decade or two?

    Granted, most games that you buy through Geforce Now you also get a secondary copy of through, say, Steam. Yet hypothetically anything can happen. If you have a local copy of the game, you can always work around the service you obtained it through being down. Short of that, you have no legal means to re-obtain the game content.

    --Even though Steam is huge today, what about in the future? (can some other marketplace take over?)

    That's quite scary when you think that you have hundreds of games with no physical copy, relying on services run by one or two companies. HEH

    For Windows 7 it should be a non-issue as long as you do the install with the bios set to AHCI mode (even if you have to F6-install drivers for the disk controller to do the first install).
    --Windows Update is far less aggressive for drivers on 7. (compared to 10)


    **Though Windows 7 may need a P2P (physical-to-physical) adjust to get it working on newer hardware, like when you image it over to other systems (modern disk controllers being a pain here). Paragon HDM can actually do this adjustment for you and automatically insert the drivers needed to get the OS to boot (most of the time).

    Yep, here's hoping. :D

    Hopefully people go the preemptive route rather than waiting till it's too late.
     
    Last edited: Feb 24, 2018
  12. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    That's awesome, thanks, more points for TeraCopy xD What would be the safest way of saving new data though? (Example, after finding a new image gallery full of anime art you'd like to save) Saving it directly to the NAS (so default download location would be set to somewhere on the server), or saving it first on the ECC machine, and from there moving it with TeraCopy to the NAS (to get the Verify feature)? Or would the server have better ways of making sure the data got saved correctly if saved directly onto it due to it using ZFS and RAID10 while handling the write-process?

    That's good to know about TCP, but it sounds like quite a weak form of verify :/

    A question below regarding the highlighted parts:
    'Anyway, locally over a LAN, anything with a verify that reads back the file(s) after uploading to storage should be good enough to catch these failures. I seem to recall that hardware offloading is default "disabled" in FreeNAS as well.. There's still always the potential for errors from the sending side (your machine uploading -- the source) though, thus the importance of always having some type of a verify of transferred data.'

    In relation to the first question in this post, if saving directly onto the server is there this 'verify of transferred data'?

    Huh, never knew it could affect an online game too. Thought corruption was more for stored data rather than a live game (a match being hosted). Thanks, always learning more xD

    Awesome, thanks, I'll still run the -c --dry-run just to be safe. Could never take it for granted; remember, the externals are the last line of defense ;)

    Really is weird, especially since I don't even use the same tools/programs on both my machines -- they each serve very different purposes (one's just a basic everyday laptop, while the other is a full-blown gaming and recording machine), and they run different OSes too.

    Oh well, doesn't really matter in the end, I'm glad mine was defaulted to off in both cases, it is weird that we have different defaults though on several machines xD

    Yep, it very well could indeed. There were several systems that were big back in the day (Amiga, NES, Commodore 64, etc) which are no longer around today (or no one cares about anymore), so if the same had to happen to Steam or these other online stores, all that digital content is lost :( Very worrying. Plus I also prefer having the physical disc for aesthetic value; it looks nice having a bookshelf/showcase full of games (you know, kinda like those people that have a study, but picture it full of games instead of books xD). With digital, everything is saved on an HDD with nothing to show for it :( I miss the days when a game just came out complete on disc, no patches or DLC and the like. (going back to the PS2 days!!)

    Yep, I've already got my BIOS set to AHCI. Read it helps greatly with SSDs. I never had to do this 'F6-install drivers for the disk controller' before :/ Everything just worked out of the box.

    That's a really neat feature about Paragon HDM, thanks xD I assume it will need some form of internet connection to find the suitable driver?

    Here's where I'm gonna get screwed though, when Windows 7 support eventually does end, it won't be safe to use the gaming rig with an internet connection anymore right? Won't this be a problem since I need to connect it to the server to upload the gameplay recordings?

    Well we are on Guru3d here, home of the enthusiasts, I'm sure a couple of users are already ordering server parts haha :p
     
  13. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Directly to the NAS is the best bet. You could either do that from the NAS itself (logged in via SSH) or just save to your mounted volume. Either way you do it, it's still possible for download corruption to occur. (during the download)

    --You won't have a "verify" for downloading to the NAS (via network mount), yet there's also the potential that a download straight from the Internet can come in with errors just the same. At the very least, if you're downloading video, art, etc, odds are you're going to be viewing it right after you download it (at least once) -- might as well do the initial playback or viewing "off the NAS", so that you know the data is usable on it.

    eg, you'll at least have the human factor of "did my download work" (can I watch it) after it's saved in its final resting place. [without viewing it twice]

    Indeed it's not. It's especially easy to tamper with TCP packets, and some more complicated Internet Gateways even clobber certain fields or coalesce packets together (into larger ones). Man in the middle tampering with fields like the checksum is actually fairly common, and not necessarily malicious (like in the case of coalescing). --Still, it's much better than having no checks at all.

    Sadly there isn't. At least not unless the download source (the author who put it online) is nice enough to list some computed hashes that you can use for verification (which often exist, at least in the open-source world with Linux and BSD distros). Keep in mind the same would still be true even if this data is downloaded straight onto the NAS itself. (assuming just HTTP or HTTPS is used in the transfer)

    --Short of protocols that have support to check data themselves [eg, with something like rsync], you're basically relying on TCP's built in integrity check to make sure it reaches your end un-corrupt. (the packet checksums)
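    Just to make that concrete, here's a rough sketch of the kind of check meant here, assuming the author actually published hashes and using the GNU-style sha256sum tool (on FreeBSD/FreeNAS the equivalent base tool is sha256); the filenames are placeholders:
    Code:
    # compute the SHA-256 of whatever you downloaded and eyeball it against the published value
    sha256sum some-download.iso
    
    # if the site ships a sha256sum.txt next to the downloads, this re-hashes every
    # file listed in it and prints OK / FAILED per file
    sha256sum -c sha256sum.txt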

    Yeah, this is definitely one of the big shames of moving to digital distribution (although it's probably inevitable due to convenience). It's much harder to be a collector in a meaningful-way when you can't rig a sweet display of your collection anymore. It really lacks the "cool-factor" being unable to show things off other than online virtually (on some webpage).

    Data can be stored and duplicated without deterioration, yet the original boxed-products are much harder to keep in pristine condition (as they wear down with time & use) .. and it's certainly much more rare to find people who cared enough to do so.

    It probably has to do with the virtualization tools I use for platform-testing. Wouldn't surprise me in the slightest if one of them changed the OS's settings, though I'm not certain on this.

    Actually it helps a ton with both (HDDs and SSDs). In IDE mode the OS is unable to send TRIM to SSDs (which means you're relying on the disk's GC feature to release free space back to wear-leveling); similarly, in IDE mode on HDDs you don't get NCQ (Native Command Queuing) -- the drive's ability to reorder queued commands so the head takes a more efficient path over the platters (instead of servicing requests strictly in the order they arrive).

    When lots of reads and writes are queued up on a hard disk, NCQ lets the drive pick a more optimal path across the disk surface, cutting down seek and rotational latency for random-access type workloads. On average NCQ raises both non-sequential read and write speeds, and thus makes the system much more responsive, faster to boot (usually), quicker to start games, etc. (just in general getting more IO out of your disk surface per rotation)
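    If you ever want to double-check that NCQ is actually in use (rather than the drive running in a legacy mode), something along these lines works on the FreeNAS box or any Linux machine; ada0 / sda are placeholder device names:
    Code:
    # FreeBSD / FreeNAS: look for the Native Command Queuing row in the drive's identify data
    camcontrol identify ada0 | grep -i "command queuing"
    
    # Linux: a queue depth greater than 1 means NCQ is being used
    cat /sys/block/sda/device/queue_depth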

    --Nope, no connection required. Paragon pushes out common drivers (if you keep the software updated and re-burn your standalone LiveCD from time to time).

    You can also F6-style (like during Windows setup) provide drivers to Paragon for accessing a disk (if HDM can't access your drives, and if you're using the WinPE-based bootable -- meaning the LiveCD can use these drivers too). Once drivers are loaded by HDM, and once it can access the disk controller on your new machine (to restore an image to), then HDM will use anything you've loaded for insertion into the restored OS too. [to make the OS start properly without a 7B -- inaccessible boot device -- bluescreen]

    Probably even after support ends for Windows 7 there'll still be at least AntiVirus tools that you can run on the OS (meaning at least something updated for new threats). Though yes ... presumably it'd be smart to stop doing web-surfing under 7 after security fixes are halted.

    eg, just use it for gaming alone and be more careful thereafter; that may be a good use for two machines at your desk. If not, there's always browsing inside a virtual machine and/or using something such as Sandboxie.

    **OS security matters of course, yet if you can stop software from touching the machine in the first place, then the vulnerabilities in the underlying OS can at least be somewhat mitigated. You'll just have to be a lot more careful with what touches the machine.

    Yep!
    --After all, we're typing all this up in a forum dedicated to disk drives, controllers, and importantly storage; gotta be a lot of like-minded people in here, heh.
     
    321Boom likes this.
  14. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Sorry I've been out for so long, had a crippling shoulder and back pain (previous injury from when I was young comes back to haunt me occasionally). Even had to take some time off work, but much better now :) Been 2 weeks without gaming x_X

    Got it, I guess I'll be saving to the mounted volume then, rather than from the NAS itself, since I will be using the ECC desktop to do all the browsing and saving of new data. Regarding 'download corruption', is it the same risk as saving directly to a hard drive (like I'm doing currently with my 4TB storage drive in my gaming rig), or a higher risk since it's saved through a network mount/mounted volume?

    Regarding your comment 'might as well do the initial playback or viewing "off the NAS", so that you know the data is usable on it.', by "off the NAS", do you mean using the server directly (as in using FreeNAS as the operating system to view the files), or through the ECC desktop reading off of the server (remember most stuff I'll be saving directly to the server as a default download location)? If it's usable/readable on the ECC desktop reading off of the server, wouldn't it be readable from any machine that is connected to the server? (and even on the server itself for that matter)

    I guess it is better than nothing, but someone really should develop or implement something which is more refined when you consider how many more secure options are already available (like MD5 and SHA).

    Earlier in the thread, you mentioned 'I seem to recall that hardware offloading is default "disabled" in FreeNAS as well..', is this something I should note to make sure it stays disabled? (what would be the disadvantages of having it enabled?) Some quick research shows that having it enabled will increase performance, but I'm assuming this comes at a trade-off as usual (especially since I think you're suggesting it should be disabled).

    I guess this is why it's so important to check / view after downloading then, to make sure it got there safely. That could be quite a time-consuming exercise though :/ While art wouldn't take as long, it could be a problem for video/music/games. So, for video/music files, do I just make sure the file opens, or would I have to view the whole thing in case it crashes/is corrupt further on in the file? What about games?

    Yeah it really is a terrible shame. While it is convenient (you could play the game straight away on release within minutes, no need to go to the store to pick it up or wait for it to arrive in the mail, and it takes up far less storage space too), there are still some major drawbacks to digital which are a huge pill to swallow (no collector's value, and god forbid the servers/stores like Steam/PSN etc die) x_X

    Yep I agree that data can be duplicated, but with the huge file sizes games carry now, that's becoming quite an issue too (I heard Final Fantasy XV is around 80GB!!). Good point about the deterioration of the discs, you gave my ocd something else to go nuts about :p

    That's awesome, thanks for the detailed explanation about it. Nice knowing a bit of the physics of how the head's path across the disc affects read/write speeds. Very interesting :)

    Wow that's really cool about Paragon, it really makes the software worth paying for, given its ease of use.

    Sorry I'm lost on this F6-drivers thing. I've never had to do this before, not even on previous Windows installs (usually I just pop in the disc, follow the prompts, then install the drivers after the OS finished installing). Is this only for when using LiveCDs?

    I assume that before using the LiveCD to restore the fresh image, it would still be a good idea to format the SSD that the OS will be going onto again, correct?

    Yep, planned to use the gaming rig for gaming alone once the server is built. No web-surfing from the gaming rig wouldn't be a problem, there's the ECC desktop for that. Only time I'd need the web from the gaming rig is to get a new version of a program/driver updates.

    So, even if it's connected to the internet (I need an internet connection for it to see the server as a mounted volume) but I don't visit any websites, it's safe? (even though security fixes would no longer be getting released?) As in, even though I'm not doing any browsing, isn't there still a connection present through which something malicious could enter?

    Well basically the only software that will be running on the gaming rig are games and a recording program, so very limited amount of software (plus I don't play online, shmups are single-player anyway).

    Glad to be back to continue riddling this thread with questions haha :p Hopefully other members are finding your advice as helpful as I am ;)
     
  15. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Ouch on both counts, both the pain and being unable to even play games. (that's gotta be some serious pain then)

    When you send data to your NAS this is also usually a TCP connection (eg, with something like a Windows file share / Samba). The risk of corruption is pretty low over the LAN vs over the Internet (more hops involved, more devices, more distance, **more that can go wrong in general**) ... yet it can happen on both and "could" happen between the NAS and your workstation. However, keep in mind that since there's no guarantee that the download from the Internet to your machine (over say HTTP or FTP) is going to come in intact without damage, I don't personally consider downloading to a network mount any more of a risk than downloading to a local drive (on the machine itself).

    ^ Because either way that you do it there's some degree of risk. (if the TCP checksums don't catch the error)

    By this I mean streaming it off the NAS to your media-PC or wherever on your network that you want to watch it. (eg, probably on a Television driven by an Atom or ARM)

    There's actually been quite a few talks on this in terms of refining TCP, or what the next evolution of the Internet should be (i.e. whether we need new low-level protocols -- lots of proposals exist). In general I think that most people don't consider it a problem, as any developer can implement pretty much anything they want (such as additional transmission security) on top of the existing protocols (TCP / UDP).

    --For example, FTP (old) relies on just TCP's checks. SFTP (the SSH File Transfer Protocol) is fully encrypted, and due to the encryption it has additional strong integrity checking to prevent man-in-the-middle attacks (most people today would probably call SFTP the successor and replacement of FTP). Protocols like SFTP provide extremely strong protection for a network mount too, though at a significantly higher CPU cost / overhead. They can be mounted, for example, with the SSHFS project (built on FUSE -- a usermode FS); some network-mount options exist for Windows too (though pretty much all are commercial).
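    As a rough sketch of what that looks like from a Linux/BSD client (the hostname, user and paths here are made-up placeholders, and sshfs needs to be installed on the client):
    Code:
    # mount the NAS's media folder over SFTP as if it were a local folder
    # (read-only here, since a media player only needs to read)
    mkdir -p /mnt/nas-media
    sshfs -o ro media@nas:/mnt/tank/media /mnt/nas-media
    
    # ...play files straight off the mount, then detach it when done
    fusermount -u /mnt/nas-media    # Linux
    # umount /mnt/nas-media         # FreeBSD / macOS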

    Yep, the tradeoff is using the heavily tested (pretty much guaranteed working) OS implementation (purely in software), vs the hardware offloading implementation handled by the drivers and NIC hardware. Using the NIC hardware can reduce the CPU overhead of the NAS (or of your client computers), HOWEVER, modern CPUs are so fast as-is that this usually doesn't even matter until 10gbit speeds.

    ^ Pretty much I always elect to use the OS's implementation and to disable hardware offloading where I can, yet I've also seen the effects of absolutely horrible NIC drivers, to the point where entire systems (on fresh installs) got corrupted from bad downloads (because of the offloading). From my view, any saved CPU cycles are just not worth the risk of the offloading implementation being broken / not working right.
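    For reference, this is roughly what disabling the offloads looks like on the FreeBSD side (FreeNAS normally exposes it through its GUI / tunables instead; em0 is just a placeholder interface name, and the change doesn't persist across reboots unless you save it there):
    Code:
    # show which offload features (checksum offload, TSO, LRO) are currently enabled
    ifconfig em0
    
    # turn the common offload features off for this interface
    ifconfig em0 -txcsum -rxcsum -tso -lro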

    Right, yet when you download a game and play it to completion, that's a check. When you watch a movie once, similarly you're done after that.

    Presumably everything that you download you wind up using "at least" one time (without this becoming a chore). With games, thankfully most of the digital distribution systems these days seem to have integrity checking built in ... as the size of games becomes massive, it seems that download corruption is becoming more and more common.

    This is something you'll only see in WinPE / LiveCD Windows. The F6-install is a very brief prompt that comes up at the start (of an installation) where the media asks you to hit F6 "if" you have additional drivers (eg, disk controller drivers) that are required to get through the installation. This becomes more of a problem when you have "exotic" hardware (like some RAID controller), but yes as time goes on it may become a requirement just due to OS's (eg, Win7) aging and new hardware coming out.

    HDM can restore your partitions, delete existing partitions (on the drive that it's restoring to), and even resize the destination partitions at the restoration time. No extra tool or step needed there.


    Yep that's the idea. Provided that you make sure that Windows Firewall is restricted (no inbound servers, file-shares, etc), and that you in general don't allow inbound access to the machine --- then the only way for your Win7 install to become compromised is by execution "on the machine itself".

    I'd strongly suggest making it a habit to install a replacement web browser [especially with Internet Explorer updates being stopped], something like Sandboxie, and an AV as the first things that go on the machine after a fresh install. [for the short incursion onto the web to get everything else you need]

    eg, copy these to the machine via a flash-drive before you start browsing, and keep the machine offline "until it has some protection" basically.


    Glad to hear that you're feeling better and recuperating! :)
     

  16. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Indeed, a dislocated shoulder and a disc in my lower back that's moved out of place. Back in my skateboarding days 10 years ago, I used to love jumping off stairs and high places. It's an awesome sport, but it comes with its fair share of injuries (some of which never fully heal and keep coming back to remind you).

    Hmm, I guess the general rule is to just always check then, no matter how you're saving something (whether directly to an HDD or over a network mount), since download corruption can occur. As you're saying, both carry some degree of risk, so either way it's best to double-check and make sure the files actually managed to get saved intact. Thanks for the emphasis on how important this checking is.

    Got it, any machine then. In my case it'll be the ECC desktop and the gaming rig (I like working through PCs where possible, for the added codecs and the much greater functionality and options that media players like VLC offer).

    I know this is off topic from data, but just asking in case you've encountered the same problem, since you have lots of machines too. I'm planning on hooking up both the gaming rig and the ECC desktop to the same TV (the server I'll be hooking up to a small PC monitor on its own). Problem is the TV only has 1 PC HDMI port, and 2 normal HDMI ports (the kind you'd usually connect consoles or cable boxes to). Colours are a bit funny when using the normal HDMI ports (not the PC one) on a PC, kind of like they bleed into each other sometimes, and are a bit too bright. I currently have this problem with my laptop (which is connected to one of the normal HDMI ports, since the gaming rig is on the PC HDMI port) and I live with it, but for the ECC desktop, where I'll be viewing art and my gameplay recordings, it's quite a problem. Any idea about this? (It has a VGA port as well, but who wants to use VGA when we have HDMI? VGA was giving the image a very washed-out look)

    Better checks for TCP really should be implemented by default, especially since most of the world wouldn't know how to go about mounting added security.

    So this SSHFS sounds interesting. In their main page: 'Most SSH servers support and enable this SFTP access by default, so SSHFS is very simple to use - there's nothing to do on the server-side.' Why would I need to install SSHFS then if SFTP is already enabled by default on the server?

    What exactly do you mean by -- usermode FS? (by FS you mean File System?) I'm not sure if I'm misinterpreting this, but will this work on ZFS, since it's also another type of File System?

    Would I require any of the Windows network-mount options since everything is going to be saved to the server and not on my Windows machines?

    Got it and noted, sounds like a very good reason to have hardware offloading disabled then. Thanks for the insight. Most important thing is the safety of the data, not performance.

    That doesn't sound so good, that a game has to be played to completion to make sure it's a fully readable copy :/ Some games have been sitting on my HDD for ages and haven't been tried or tested yet (so many games, so little time). While I get that I will eventually end up trying the game out in the future, it could be months down the line till this happens for certain games.

    Same could be said for some gameplay videos, like seeing someone pull off some marvelous achievement in a Youtube video, so it would be worth downloading to have a copy of it for reference. I wouldn't usually re-watch this instantly if I just saw it on Youtube :/

    Ahh ok, thanks for the explanation, I'll keep this in mind for when I need to use LiveCDs. Is there a reason for such an emphasis on disk controller drivers? I understand that these are required to access the drive that the fresh Windows image will be installed onto, but isn't this the same as when installing off of the Windows disc? There are no additional drivers present at that point either (just what Windows 7 has pre-loaded), so wouldn't those also be in the image we are restoring?

    Amazing, definitely a worthy tool to purchase once I get the server set up. It will serve me well having a fully updated Windows 7 image with pre-installed language packs. Thank you so much for suggesting this method. Apart from being a massive time-saver when it comes to re-installs, it's also nice knowing I have a fully functional copy of that OS (since it is required to play some of my games due to compatibility issues).

    Thanks for the advice about being more cautious on the Windows 7 machine. Regarding the firewall being restricted (no inbound connections), wouldn't this be an issue since I'd need to copy some games that are archived on the server to my gaming rig's SSD to play? (I know a flash drive could be used to move the file, but for convenience sake?)

    So here's another small headache I've been wrapping my head in circles about... you've probably heard of these Meltdown and Spectre patches which some claim to have seen performance hits with, especially on heavy I/O-intensive tasks. Well, while games weren't heavily affected by this thankfully, I've done some research on how this might affect the recording program I use (since I assume that's very I/O intensive, OBS Studio), and some users have reported their rendering lag shooting from 0.1% all the way up to 8%!!! (that will produce a horrible looking recording). I need a workaround for this, so here's how it's working out in my head, you tell me where I'm mistaken:
    I'm thinking, once the ECC desktop is built, my gaming rig won't even need access to the internet, only to occasionally hook up to the server to upload my new gameplay recordings for archival, take games out of storage from the server, and download some program/driver updates. Would this work:
    Step 1. Using this .bat file to quickly enable / disable the internet when required (https://www.labnol.org/software/disable-internet-connection/26235/) (suggesting the .bat file because Windows 7 takes ages enabling the internet again when doing this through Control Panel; does the .bat file look safe/fishy to you?)
    Step 2. Using InSpectre to Disable the Meltdown and Spectre patches to avoid the performance hit and get the recording program running at full power again. (http://www.guru3d.com/files-details/download-inspectre.html) Since there's no internet connection active, I'm at no risk having the patches disabled right?
    Step 3. After a couple of good gaming sessions with some recordings worth uploading to the server, use InSpectre to re-activate the patches, and the .bat file to re-enable the internet so I can see the server as a mounted volume again and upload the recordings.

    Keep repeating steps 1-3. This won't cause trouble/complications for the server, having the gaming rig constantly mounting / unmounting from it?

    Thank you, greatly appreciate it my friend :D Glad to be back and getting back to the things that matter (gaming and pc related stuff) ;) unfortunately recuperating also means going back to work x_X was nice waking up late and watching anime all day haha :p (I'm still a kid at heart lol)

    Thanks once again for all your time in our correspondence :)
     
  17. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Ah, injuries that never heal, that part I know far too well. Doesn't even take doing much that's physical for it to happen, and it can happen in a split second.. I had the pleasure of tripping over a hungry cat "wanting food" (closely following) while carrying a box down a flight of stairs. The cat was just fine; I, on the other hand, got a sprain with completely detached ligaments from the leg being twisted while falling over the first step.

    Lots of years later, circulation is just completely mucked in that leg. Leg usable, got back full muscle control of the ankle a few months later, though the pain never quite ended. I know people like to say that what doesn't kill you makes you stronger ... it's just not true, lol.

    Yep, modern installers (InstallShield and the like) usually integrity-check themselves on launch. Always look for other ways to check what you download though; if there's a hash, always use it. Hashes also help detect tampering with the downloaded files, like if you download something from an untrusted mirror. (some place the author didn't upload it to)

    Never had a problem quite like that. I have run out of inputs on a TV to connect all my game consoles though. I suspect a workaround is to get yourself an HDMI switch (multiple inputs to one HDMI output), given you have one port on the TV that works solid with everything and there's just not enough of it -- (the PC port)

    --They cost about $20-30 (USD) for a 4-port switch, so they're quite cheap. Some even support advanced features like overlays, a PIP window, and a remote control for changing sources without hitting buttons on the switcher box.

    The main advantage of making changes to the low-level networking protocols is that they can then be totally transparent to existing software. Like with mptcp (adding multihoming support to TCP), which requires absolutely no application changes and works with everything (it just requires support for it in the kernel). On the other hand, even without new protocols there are already solutions that are "kind of" transparent, such as using tunneling software.

    In a sense you can think of SFTP as FTP over SSH. That's a bit oversimplified, though it helps to think of SFTP as just using SSH as its foundation for security and login credentials. At the same time you can run anything through SSH, and in fact even Windows file sharing through SSH, or remote control protocols such as VNC or RDP.

    SSHFS is for the client-end (something like a media-PC), it gives you the ability to mount an SFTP file-share / folder as if it were a drive on your PC.

    --In Windows terms, think of it as if your SFTP share had a drive letter and you can just "File > Open", select the drive letter and pick any file that you want to play [much like Windows file sharing operates]. That means that a program such as VLC Media Player can directly play files from the share without first copying them to your local drive.

    Normally when you're accessing your disk drives, that's all done within the kernel. That is to say the OS (kernel) and your controller drivers define the file system, how to access it, and how to get at your storage device. It makes sense for it to be done this way, because your data is stored on a physical device inside your computer (and all software accesses data through the OS in a shared manner)... Yet what if you wanted a filesystem inside a filesystem, virtual disks stored on a disk, or even network drives that act like an internal drive? There's potentially no physical hardware directly involved anymore, and it actually gets exceptionally difficult to design these things with a purely kernel-mode model.


    --FUSE to the rescue.

    FUSE as a concept provides "normal software" (user-land) a way to create and expose mountable volumes without additional kernel-mode code. Of course all that stuff like networking is easier when you don't have to think about how you're going to ferry that data to some kernel-mode driver. FUSE eliminates the need to design this interface yourself.

    Only if you don't want to use Windows' built-in file sharing for accessing your NAS. There are advantages and disadvantages either way, third-party mounting tools vs Windows' built-in client. The Windows built-in is definitely going to be the most stable, though it may not be the most secure.

    --It's also possible to run your Windows share mount through something like SSH to the NAS.

    Games sitting on your HDDs for ages: not much you can do there, since 'rot' can happen over time. You'll just have to hope you get lucky and that your disk(s) haven't failed you to date. With videos, fortunately even if some errors happen in a download, usually the video will continue playing with just a frame or so being damaged. Video formats are pretty resilient since they're usually "built for streaming" (corruption is treated just like Internet packet loss or an underrun / skip-ahead on the fly).
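    If you'd rather not sit through a whole file just to confirm it's intact, a decoder can do the watching for you. A minimal sketch with ffmpeg (assuming you have it installed; the filename is a placeholder):
    Code:
    # decode the entire file, throw the output away, and print only errors;
    # silence means every frame decoded cleanly
    ffmpeg -v error -i recording.mkv -f null -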

    Yep, same concept, and for the same problem too. On some machines you'll find that a Windows 7 install disk actually won't detect any drives (without adding drivers). The emphasis is on the disk controller since that's the most critical piece of hardware for getting through an installation (you can't install if you can't write files).. Once through the install / booted into Windows, getting the rest of the hardware working is no problem. IMO, the second-worst show-stopper for getting through a Windows install is "USB", although a lot of motherboards still have PS/2 keyboard support.

    --The drivers that you need are on the image that you're restoring... Yes, yet to restore that image HDM likewise has to be able to access your drive to write the OS image to it (just like the installer had to). In the case of a WinPE bootable, HDM suffers all the limitations of Windows, so if Windows needed drivers, so does HDM's bootable. Fortunately, Paragon's media builder will automatically bundle new collections of these drivers for various hardware when you build your restore CD. (Paragon updates this routinely)

    ^ I've found it 'rare' to need to do anything other than burn a new disk, yet you have the option of manual control just the same.

    It can be, though remember that all the data on your NAS is presumably being scanned while it's being accessed, by the AntiVirus on your machine and the other machines that you use. Your NAS's files can become infected, just like your hard disk, and just like your USB devices can by being connected to infected machines that can write to them.

    --If you get really paranoid about this, I suggest creating a set of hashes for the contents of your NAS ... those hashes are so that you can 'check' if you ever suspect something is awry (that files are being changed on you). Another option might be to mount the NAS as read-only or "partially restricted" (making most folders write-locked) on machines that only need to read data. [like a gaming PC that only needs to fetch the games you want to play out of storage]
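    A minimal sketch of what building and re-checking such a set of hashes could look like, run on the NAS itself or over SSH (the dataset path is a made-up placeholder, and it assumes the GNU-style sha256sum tool; FreeBSD's base equivalent is sha256):
    Code:
    # build a manifest of SHA-256 hashes for everything under the archive
    find /mnt/tank/archive -type f -exec sha256sum {} + > /mnt/tank/manifest.sha256
    
    # later: re-hash everything and report anything that changed or went missing
    sha256sum -c /mnt/tank/manifest.sha256 | grep -v ': OK$'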

    What I mean by that for example, is that rather than completely locking down your Windows 7 rig you might just only give the Windows 7 machine write access to one folder. So, even if the machine gets compromised, it can't really touch much of your storage.


    Long story short, if malware can touch a drive, it can propagate through a drive from one machine to another.

    Looks pretty harmless in this case. I would suggest that, when in doubt, you inspect scripts (open them in a text editor) and read them line by line to see what they do. (looking up everything contained)

    Code:
    GOTO LABNOL
    
    Script to toggle your Internet Connection
    Run as Admin to enable or disable Internet
    
    Written by Amit Agarwal on 10/19/2012
    See http://labnol.org/?p=26235 for details
    
    Distribution allowed but with attribution
    
    :LABNOL
    
    @SET TASK=ENABLE
    
    FOR /F "delims== tokens=2" %%i IN ('wmic nic where physicaladapter^=true get netenabled /value ^| find "NetEnabled"') DO @SET STATUS=%%i
    
    IF %STATUS%==TRUE @SET TASK=DISABLE
    
    WMIC PATH Win32_NetworkAdapter WHERE PhysicalAdapter=TRUE CALL %TASK%
    --The first portion uses GOTO and a label to 'skip' execution of the starting comments; the last three lines are the actual behavior: they check whether the physical adapters are currently enabled, then loop over them and either enable or disable them accordingly.

    Wouldn't cause problems. I think I would personally just restrict what the machine can read and write on the NAS. [eg, the gaming machine only needs gaming files; it doesn't need access to text files with your bank password or financial records -- the most extreme examples I can think of]

    --Your NAS will have strong access control granularity, and you can use this to minimize damage from a compromised machine that can access it.
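    As a sketch of what that separation could look like on the FreeNAS side (the dataset and user names here are made up, and in practice you'd set this through the FreeNAS UI's dataset and share permissions rather than by hand):
    Code:
    # give the recordings their own dataset so it can be shared and permissioned on its own
    zfs create tank/recordings
    
    # make that one dataset writable by the account the gaming rig logs in with,
    # while the rest of the pool stays read-only (or invisible) to that account
    chown -R gamingrig:gamingrig /mnt/tank/recordings
    chmod -R u=rwX,g=rX,o= /mnt/tank/recordings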

    Definitely applies to me too.. nothing to be ashamed of there, haha.
     
    321Boom likes this.
  18. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Ouch, sorry to hear about your injury, especially with having trouble with circulation for years after the incident :( That must have been quite an ordeal :/ At least the cat made it out fine :p Indeed, you could get hurt in a second, but keep feeling its effects for years.

    How would I find these hashes? Is this just for games, or anything else too? Is it required for anime art as well if I'm just right-clicking -> Save As, or using a batch downloader (you just paste the link of the image gallery and it saves all the files in that gallery)? How would I know which is the author's hash to compare to if I got it from another site/untrusted mirror and don't have the original source to compare against? That happens a lot with some of the stuff I hunt down, lots of mirrors and people re-uploading the content because the official servers/source went down. (remember some of the games date back to 1996)

    Thanks, that is actually a very good idea, but I'm quite wary of input lag from that :/ (shmups are twitch-reaction based games). Googled it a bit, most people said it won't affect input lag as long as the switch has no image processing to do on its end. I read PIP actually could cause lag.

    Sorry, I didn't understand anything from the first paragraph. What would be the benefits of having the networking protocols transparent? I'm guessing mptcp is a program I'll need to install on the server? Just a one-time thing to set up, since you're saying 'no application changes and works with everything'? Sorry for so many questions and being a complete noob on certain things; you give me so many options, I really doubt whether I can pull all of this off sometimes. Very different to my usual backup methods (just copy onto 2 externals).

    The SFTP part I understood at least, so SSHFS seems like a very good option to implement.

    My client-end/media pc is going to be the ECC desktop, isn't SSHFS for the server/FreeNAS, not Windows?

    Awesome, that's a very good explanation, thanks. So does FUSE come incorporated into SSHFS, or is it something I'll need to install separately? (please tell me that most of these things I'm going to be installing come with GUIs and not just code lol)

    To be clear, I'm aiming for the ECC desktop to see the NAS as if it were an external drive. If I'm not using Windows' built-in file sharing, will I be using some other program (the third-party options) to browse through and organize folders and the like?

    Same question about using SSH to the NAS: will I still be using the traditional Windows folders I'm used to?

    A bit further above in my post, didn't you tell me I could use SSHFS for the client-end/media pc? (sorry if I'm confusing 2 things together)

    Brrr, don't scare me like that, thankfully I have backups. Can't wait to have this server in hand so bit-rot won't be an issue to worry about anymore...

    At least the video file will continue playing, not be fully corrupt, but that damaged frame would bug me every time I see it lol.

    That's interesting, I've never encountered this (thankfully lol). Yep the USB one was an issue on more than one occasion though. I suppose I should also include the USB 3.0 drivers in the image when I'm doing it, to have easier accessibility to modern keyboards/mice, and in case my motherboard dies and I'll need to get one which only has USB 3 ports.

    That's good, best of both worlds then, simplicity by burning a newly updated disc, or manual control if required for more complicated/troublesome scenarios. Sounds like an amazing program.

    Isn't it a good thing that the data on the NAS will be getting scanned by antivirus while accessed? Is this a feature the NAS has? My gaming rig doesn't auto-scan when I access something :/

    This sounds like a very good idea, especially the read-only/write-restricted part. Yeah, basically the gaming rig will only need to upload recordings, so one folder is all it needs write access to, and from there I could manage/organize my recordings with the ECC desktop. So if it's mounted read-only, would I still need to generate the hashes? (you said 'another option')

    That's where I'm confused: if the gaming rig only has write access to one folder on the NAS, can't malware/viruses etc still spread to the rest of the NAS from that folder? The malware would have still managed to make its way to the drive.

    Thanks for this, I really appreciate the effort and the explanation. You really know your way around this stuff. I appreciate you taking the time to download the .bat file, open it and check out its contents.

    Yes restricting access of the Windows 7 machine does sound like a very good idea, but not just looking at the server's safety here, I'd like to disable the internet on the Windows 7 machine while gaming (without the Meltdown and Spectre patches active so I don't take the performance hit) so it doesn't get compromised too. Wouldn't want someone using my GPU for Bitcoin mining while I'm trying to record gameplay :p

    That's good to know about the control the server offers. I just really hope I can set all this up, because there were some things that seemed quite complex in this post :/ (I've spent two and a half hours typing this out and researching some terms I didn't/still don't understand)

    Indeed, you're as old as you feel they say, so it's important to keep doing stuff that keeps you feeling young, games help achieve that feeling ;)
     
  19. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Alas, no written rule for this and everyone does it their own way. You may get lucky and you may not, I'd wager not with your 1996 releases unless someone later on posted a hash somewhere of the original disks.

    -Ubuntu has a wiki-page with links to hashes for various official distro downloads: https://help.ubuntu.com/community/UbuntuHashes
    -CentOS has hashes stored in a large text-file on their main builds server [in a folder for each major release version]: https://buildlogs.centos.org/rolling/7/isos/x86_64/sha256sum.txt
    -Microsoft provides hashes (in some cases) for product downloads on their download pages. (though not for all)
    -I post hashes on Guru3D within the main-thread for ClockBlocker and YAP releases.

    Pretty much you just have to dig around for them. If you find em, use em, otherwise there's not much you can do to verify download integrity.

    If any added input latency is unacceptable then yeah, definitely go for an electrically "passive" one. You'll get none of the fancy features of an active switcher and it'll be cheaper, yet in trade it will add effectively "zero" latency.... Just be aware that some devices don't like having monitors unplugged from them. eg, a passive switcher may give you grief with starting a console or a computer with the screen disconnected.

    --In my experience with these though, even the active ones don't really add perceivable delay (since it's usually very low / under 5ms). Put another way, if you can't feel the input latency of a PlayStation controller on wireless compared to wired, then odds are you wouldn't notice the switcher's delay. It's certainly nothing like a television with 60ms of processing delay added. Since they're cheap enough I'd suggest getting one active and one passive to play with, really.


    I know, I know, latency all adds up. It's certainly true, it can add up quickly.

    "mptcp" is one of the only implemented totally-transparent protocol-improvement examples that I know of, that you'll find on devices (like Apple Phones) today. Some people might call IPv6 an improvement to TCP, and it is, yet IPv6 requires support from software to use it.

    --The advantage is that you don't need to install anything, the implementation is included in the kernel and its network-stack. Software is not aware of its existence (using mptcp is the same as using TCP), and mptcp only kicks in if the other end (network-stack of the server you're connecting to) also supports mptcp. For mobile devices it can be quite awesome, in that they can then jump back-and-forth between cellular Internet access and WiFi Internet access without an application's connection being dumped. mptcp adds "multi-homing", which is just a fancy way of saying that it gets rid of the concept of a single-endpoint for a TCP-session. mptcp allows a TCP session to have multiple subflows to different IP-addresses, and for these to be actively changed (or transitioned) on the fly. (without an application being disconnected)

    Anyway, when you walk in to your house with your phone, if you're in a VoIP or a Video-Conference call on the phone and your phone connects to your home network, mptcp could handle a handoff to your house-network and save you precious cellular bandwidth (towards your monthly quota). But yeah, back to running a NAS, mptcp probably won't help you there. It's just one of the very very few working, deployed, and implemented examples that can be given of a transparent protocol in-use today.


    **this isn't something you'll use on your NAS**

    SSHFS is for the side that's mounting the server (your clients), whereas your server would grant SFTP access through something like OpenSSHd.
    -- Your FreeNAS box will come with this already installed, so all you need to do for SFTP support is to enable it.
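    For what it's worth, on a stock FreeBSD-based box the SFTP side is just a one-line subsystem in the SSH daemon's config, so enabling the SSH service is usually all it takes (the path below is the FreeBSD default and may differ elsewhere):
    Code:
    # confirm the SSH daemon is set up to serve SFTP
    grep -i '^subsystem' /etc/ssh/sshd_config
    # expected output on FreeBSD:
    # Subsystem       sftp    /usr/libexec/sftp-server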

    Yep, think of SSHFS as being built on top of "FUSE". SSHFS is entirely user-mode, and FUSE contains the kernel-mode glue code to make it all work under the hood.

    > (please tell me that most of these things I'm going to be installing come with GUIs and not just code lol)
    --All the Windows implementations tend to have GUIs (since Windows folk like GUIs, heh); sadly I can't say the same for the Linux and BSD implementations... Remember that this is only something you'd consider for "clients", and right now you only have Windows machines. Though someday maybe you'll have a Linux media-PC to save some money (without a Windows license).

    There are a couple of Windows implementations of SSHFS, some of which do have GUIs, such as WinSshFS. Another option is NetDrive, which is free for personal use (though also a commercial product) -- NOT built on FUSE for Windows. There are many products out there on the market that can do SFTP, as fortunately SFTP is very commonly used nowadays. [it has pretty much replaced FTP]

    ^ A good reason to consider something like SFTP (over Windows file sharing) is that you can then fairly-safely mount your home share over the Internet. (when traveling or from work)


    Right, if you're not using Windows' built-in file-sharing client and not using the Samba server (on FreeNAS) for sharing -- eg, using something like SFTP instead -- then you'll need some third-party / add-on client to mount your network shares. Fortunately, these clients do exist and there are a lot of them.

    No matter what you use for accessing your shares, the end-goal is to have them appear to Windows as mounted drives. Be it by Windows built-in client with Samba, SSHFS, NetDrive, or one of the others, these will all integrate such that you see a new volume that you can browse, delete, copy to and from with Windows Explorer (as if they're just folders on some drive).

    --You can also use programs such as Filezilla or WinSCP of course too. (both can access SFTP and they're both free)

    Yeah, I can relate there, lol. This is one of the reasons that I hate using Laptops and being away from a desktop for a long time. I like having my RAID arrays, and I like having my ECC memory, and I love the peace of mind that if something goes wrong I'll at least know.

    --The thought of "silent killers" is horrible, both in human health and PC health where there's nothing you can do to check "is everything *OK* right-now?".

    FreeNAS I believe has a plugin to scan with ClamAV. Sadly I wouldn't put much faith in ClamAV's detection rates for Windows malware.

    Most Windows AVs can be configured to auto-scan network drives just like local drives; they should at least scan on the first access to a file each run. If you're using something like SSHFS, they should treat the disk the same as a normal disk drive (not even be aware that it's a network drive), though it'd be a good idea to make sure you toggle any network scanning features ON in your AVs too [for Windows file shares].


    Malware spreading from a Windows machine is most likely going to be Windows targeted. It's unlikely it'll even run on FreeBSD if it's placed on a network share. (That is to say, even if you were to try to run it from FreeNAS). FreeNAS on its own won't execute transferred files though either. So, in either case the files are just sitting there inactive, even if they contain malicious code.

    --What that means is that the greatest risk is you pulling those files from the network share "to" other Windows machines.

    Well, you're going to have at least one machine that has write access to the drives of the NAS. Obviously securing the disk is the best solution, so infected machines can't spread their infection to your clean files.. At the same time, presumably you'll need to access the NAS from time to time to store new games, movies, etc. [so it may inevitably happen once or twice]

    Everything is fallible, so it's of course possible that "whoops" the machine accessing your NAS might be compromised and do stuff that it shouldn't. Hashes are always useful since without them it's hard to do manual spot checks when you suspect foul-play (an infection).

    You'll get there, trust me. Everything that you lookup now is just a step closer in understanding all that terminology, and that'll make it much simpler when you move on to building and deploying everything.
     
    Last edited: Mar 26, 2018
  20. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Thanks for the information and the links with hashes, I'll keep an eye out for these when browsing and downloading new stuff.

    So, from one of the sites I save the anime art from, names come up like this for example: 'suzuya_kantai_collection_drawn_by_gin_ichi_akacia__2c35d9e70306b3696fed52eef483e259.png'. Is the 2c35d9e70306b3696fed52eef483e259 its hash, incorporated into the file name?

    'Pretty much you just have to dig around for them. If you find em, use em, otherwise there's not much you can do to verify download integrity.' Not much you can do except for verifying yourself by manually viewing the file and making sure it's intact, correct?

    Thanks for the info, I think I figured out why the colours look different on the non-PC HDMI ports. The PC HDMI port (also known as HDMI/DVI) uses the Full colour range from 0-255, which is what a monitor uses. The non-PC HDMI ports use the Limited colour range from 16-235. Each has its own uses: basically, for what you'd be doing on a PC (gaming, art, desktop applications etc) you'd want the Full 0-255 range, while for movies and anime the Limited 16-235 range is recommended since they're produced this way for TV: https://referencehometheater.com/2014/commentary/rgb-full-vs-limited/
    I did some further testing on this, and it's true. I used different things like anime art, gaming, gaming recordings, anime, movies, and desktop applications to view the difference while switching HDMI ports for comparison. Viewing movies and especially anime on the PC HDMI port looks much more washed out, and the blacks look grayer than they do on the non-PC HDMI port. Conversely, art and gameplay recordings look much better on the PC HDMI port, and look too bright/the colours bleed too much on the non-PC HDMI port. Input lag in games is also dramatically reduced on the PC HDMI port. I guess everything has a specific use. After doing all this testing I reckon I don't need the HDMI switch, and it's a better option to do everything gaming related (that is, gaming and viewing gaming recordings) on the gaming rig (connected to the PC HDMI port of course, for less input lag and the Full 0-255 range), and watch anime and movies on the ECC desktop (which will be connected to the non-PC HDMI port to get the Limited colour range as suggested). The only con in this setup is that I was going to use the ECC desktop as my main desktop (so apart from watching anime and movies, also saving art, working on spreadsheets etc), which is where the Full range (PC HDMI port) is recommended. Argh, everything always comes with a trade-off. Sorry for the length, I know this might not exactly be your field since it's not data related.

    Thanks, that's quite interesting and shows its usefulness. What do you mean I won't need to install anything? The page you linked me to had stable release v0.94, wouldn't I need to download and install that to get mptcp working?

    By 'mptcp only kicks in if the other end (network-stack of the server you're connecting to) also supports mptcp', I'll have to install this on my Windows machines also then correct? I'm assuming I'll need a different version of mptcp for the Windows machines, the one you linked me to seems to be for Linux.

    Ok I think I must have misunderstood somewhere down the line, aren't my clients going to be my gaming rig and the ECC desktop? Since SSHFS is Unix-based, wouldn't I need the Windows ones you suggested (WinSshFs and NetDrive) for these, and SSHFS on the server since that's the one that's going to be Unix based?

    By OpenSSHd that's a typo and you mean OpenSSH right? (I can't find OpenSSHd, only Open SSH)

    That's awesome, about SSHFS and FUSE being incorporated as one.

    Hold on, clients could only be Linux machines? (this might clear up my confusion on clients which I asked above).

    Is only NetDrive not built on FUSE, or also WinSshFS? It sounds like a good idea to have one built on FUSE, right? Will these replace the Windows Explorer/file manager that I'm used to (so all viewing of folders etc will have to be done through them), or is it just a one-time thing I set up and from there it just works in the background?

    I would like to have SFTP between my server, gaming rig, and ECC desktop, but believe me, I would never dream of mounting it over the internet as you're stating. Thank you for letting me know there are options though, I appreciate it, but I'm more of a secluded, less-machines-that-access-the-server-the-better kind of guy. I'm very paranoid about this stuff, you just get a glimpse of it on my posts haha. Lol besides, I don't think it's a good idea having access to my server from work, how will I ever get my work done? :p

    These third party clients are WinSshFs and NetDrive? Same as above I guess, will I just need to use them for mounting (i.e. a one time thing I set up), or will it change the way I'm used to using Windows (not the usual Windows explorer I know and use)?

    This sounds like it could answer some of my questions from above (i.e. accessing the files through Windows normally), but I'm still quite confused now (this got complex really fast over the last couple of posts).

    This continued adding to the confusion: are FileZilla or WinSCP replacements for WinSshFS or NetDrive? (can't WinSshFS and NetDrive access SFTP too, so aren't these 4 programs doing the same thing?)

    I can imagine that I'll be the same once I get my server haha xD RAID, ZFS, ECC are features I really respect now from what you've taught me :) Thanks so much.

    So this means I'll be running virus scans from the ECC desktop scanning the server then, not using ClamAV or an antivirus on FreeNAS to do routine virus scans? Is this the best option, or are there better alternatives to ClamAV?

    Ahh ok, they can be configured to auto-scan, not set by default, that's why mine don't auto scan then :)

    'good idea to make sure you toggle any Network scanning features ON in your AV's too [for Windows file-shares]', is this only if using basic Windows file sharing, or also if using the SFTP programs you linked (WinSshFS, NetDrive)?

    That is very good info to know, and quite reassuring. So basically I could get a virus that could ruin the Windows machines, but leave the server untouched? (that would be awesome if so, because Windows can always be re-installed/restored from an image backup, but the data is still safe on the server)

    '--What that means is that the greatest risk is you pulling those files from the network share "to" other Windows machines.' So similar to above, an infected file will be sitting on the server, not doing anything wrong to the data stored on the server, but then if I open it on one of the Windows machines, only they will get affected? (while leaving the server untouched?) Am I missing something here, or is that not much of a threat? Remember all the data will be on the server; the Windows machines could easily be restored again, what's important is that the data on the server stays safe.

    Hmm, got it, so it still makes sense to generate the hashes periodically then. I assume there's some program I could run that will scan all my drives and generate hashes as a batch? Will the hash generation be done on FreeNAS or on the ECC desktop?

    Yes I'm hoping so. It seems like a flood of information for now, but I'm sure it will all fall into place when I'm implementing and building everything :) Thanks again for all your time and lengthy replies, I really appreciate it. Take care and have a great weekend ;)
     
