How much free space to leave on an external HDD used for storage? + Corruption multiple backups?

Discussion in 'SSD and HDD storage' started by 321Boom, Nov 21, 2017.

  1. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Yes, I assume the server wouldn't use up that much wattage, it's only storing data after all. So D3M1G0D's problem was basically that the UPS couldn't kick in fast enough at 700W of draw, but at 100W it would have been fine?

    Yes, being able to attach a display to it would be very helpful. I'll have to go more in depth into the differences between remote-KVM and IGP once I actually come to building it :) How much do you think it will cost btw? Just to get a rough idea here, I was thinking maybe around 1500 euros?

    That's awesome about FreeNAS, I'll definitely have to set it up. I'll have to talk to you again once I come round to building it for the Intel NIC's. If it ensures higher compatibility and safety of data then I'll definitely go for that.

    I only play shmups (games never exceed 5GB in size since the games are graphically limited), so I play them off of the SSD (for faster load times, added performance), and apart from the 4TB storage drive I have a dedicated 3TB hdd solely for recording onto. What I'm getting at is, if my primary place for the storage of my files is going to be in the server, aren't all my games (that are currently in my 4TB storage drive) going to be in there? Why would I need the 4TB storage drive anymore then? (If it's in the gaming rig it's non-ECC storage)

    So you mean there is only one write happening that is non-ECC. I guess this is safe, yeah? I mean, I've been using non-ECC systems forever (I've just learnt about ECC from you), and nothing has ever corrupted for me from just saving it; it usually happens over time.

    So even if my gaming rig crashes while I'm reading something off of the server (like watching a video for example, or updating a document), the file in the server is still 100% safe, even though the gaming rig viewing it is non-ECC?

    Got it, but isn't the file still passing through the non-ECC channels again to save the changes? I've got documents I update every day, so wouldn't that be quite a high risk since I'm doing it daily? Why would sitting in a machine for months or years make a difference? Isn't ECC useful while the file is moving through the channels? If it's just sitting there shouldn't it be safe since it's not being accessed or moved?

    That's not good for me, I have lots of stuff saved in Japanese kanji (I'm quite an otaku), and lots of text files with names that differ only by case :/

    I'd be a bit wary of that (scared that one program might 'delete files' because it sees them differently from how the other program sorted them) :/ Why would I need to use both if both serve the same purpose?

    Sorry, I know I'm gonna sound like a doofus, but I have no idea what those (syntax (re-read) and bash-script) are.

    Oh I'll be careful don't worry haha, you have no idea how paranoid I am while moving my data around and backing up xD The amount of double checking to make sure everything moved safely across is beyond belief lol.

    That's good, thanks. Just asked because DPC latency can increase from simply having some extra background processes running. Yeah I won't be playing games off of the server, I'll move those to the SSD in the gaming rig :)

    Well I've never built a machine from scratch (the hardware part scares me in case something breaks during installation, doing irreparable damage to some 350 euro processor lol, I wanna do it someday to learn though), but software-wise I catch on quite fast. Yes I know how to install Windows, and am versatile with a few third-party programs which I use (recording software, sticking English patches onto Japanese games, anything to make my gaming life more enjoyable I guess). The server has to be in Linux? It can't be Windows based? What are the advantages of Linux over Windows?

    Thanks, but I know I ask LOADS of questions, so I must be quite a headache to reply to x_X Yeah first time for a server, if you remember this thread actually started with asking about how much free space to leave in an external hdd lol. 'Servers aren't scary', far from it, they're an impressive machine meant to keep your data safe, if anything they're a blessing haha xD Yeah been using pcs since I was 8 years old! :D Well, mostly for gaming lol, but got quite a collection of anime art and rare gaming soundtracks which are now irreplaceable if lost (some even date back to 1996!)

    That always on inverter actually sounds like a good idea haha xD
     
  2. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    You'd almost certainly be fine if you only had 100 watts of draw (we have fairly frequent power failures here -- ~5-10 times a year -- and I've yet to see one of my servers taken out unclean by a power failure). The more current you draw (and/or the proportionally smaller the PSU), the less time the PSU will hold charge for when power is cut.

    --eg, the faster the UPS would need to switch over.

    Price, not as bad as you'd expect. In the US, around $300-500 for the server itself, then the rest is your disks (so this just depends on how much storage and redundancy you put in there). You can figure that you'll need 3 or more disks in the machine (on the low side), more if you want to go for RAID10 (potentially with three-way mirroring) or RAID6.

    I would personally keep your OS partition "separate" from the data partition (two small drives for RAID1 for the Server's OS). The idea of this is that Software-RAID1 (mirroring) is bootable, and this gives you more choices for RAID modes on the storage disks.


    For a storage server the main use of RAM is going to be disk-cache, given it doesn't run much other than the web-interface, monitoring tools, and server software. You can get away with 2-4GB of RAM and get quite acceptable performance.

    (though more RAM will help with repeated access to files OR large sustained writes)


    Note: The server doesn't have to be bleeding edge, it doesn't have to be that fast, and it can be an older-generation machine, such as one using DDR3. This cuts cost a lot.

    You can disable hardware offloading on a lot of other NICs to get them working, but everything (all features) is pretty well guaranteed to work on Intel NICs out of the box (it's mainly a BSD thing here) with no hassle.

    As far as everyone saying FreeBSD isn't Linux goes: the more you dig into this, the more you'll see it's true, especially with hardware compatibility. Apples and oranges. BSD is far more picky about networking hardware.

    Recording your videos directly to the NAS is possible, but may saturate your NIC and potentially cause lag during online gaming.

    -It indeed IS possible. But if you really want to do that, go for something like RAID10 (striping + mirroring) on the storage server AND get a dedicated network card used exclusively for the storage server.

    For games that aren't access-time sensitive, save games, etc, SURE, you can put those on the NAS. In fact, I would highly suggest symlinking your save-games folder to there (to help in not losing your saves on failure).

    Exactly.

    There's no such thing as 0% risk, so it's best to think of this as "reducing" the probability of corruption. To drive it down lower, you'd need ECC in your desktop that's accessing the network-volume.

    Best bet is to take a read of the wikipedia page here. It lists the general attributes and rules of the most common filesystems for naming.
    --You're fine though, if you're able to store your files on a Windows FS like you currently are, then those same files can be copied to a unix style FS without issue.

    Bash is the unix world equivalent to Batch. eg, you can store a command in a bash-script and then run the bash script rather than typing out the command.
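
    For illustration (the script name and command here are made up), storing a command in a script looks like this; once saved, you run the script instead of retyping the command:

    Code:
    #!/bin/sh
    # hello.sh -- a trivial shell script: the stored command runs whenever the script does.
    # Make it executable once with: chmod +x hello.sh    Then run it with: ./hello.sh
    echo "Hello from a script"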

    By re-reading the syntax, I mean things like the source and destination in your rsync command. aka, you want to make super-sure that you're doing a mirror operation in the correct direction. (not restoring the backup from what you intended to be the destination)

    You don't need to use both. Though yeah, the biggest thing to be careful of is source and destination "direction" like mentioned above.

    EDIT: To be clear, the deleting of files happens because of direction.

    For example, say that I have a new folder, named "new", and an old folder named "old".
    ---If I run a mirror (rsync with --delete) from old to new, any files I created within "new" since the last run will be deleted, given they're not in old. This is what you need to be cautious about, since there are basically no confirmation prompts.

    rsync can be as dangerous as "rm -rf" (rm = remove / delete, "-rf" = recursive, force [no confirmation prompt]). "rm -rf *" is 'safe' if you are 100% sure what folder you're in, but it's really-really-really BAD PRACTICE, as you could be at the root of the drive -- in which case it may delete every file on the disk.

    ** Similarly mirroring an empty folder to the root folder would achieve the same effect (deleting everything).

    ^ You can lower the chance of doing something bad by always using absolute (full) paths, proof-reading, and saving commands to a script once a backup has worked, to reduce human error. The same applies to most tools of this nature, by the way: any large-scale recursive file operation should be done with caution (until you've saved the command to a script).
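
    As a rough sketch of that advice (the paths here are hypothetical), a saved backup command with a dry-run preview might look like:

    Code:
    #!/bin/sh
    # mirror-to-backup.sh -- mirror SRC to DST. Direction matters: source first, destination second.
    SRC="/mnt/tank/documents/"
    DST="/mnt/usb-backup/documents/"
    # Preview first: -n (--dry-run) only lists what would be copied or deleted, changing nothing.
    rsync -avn --delete "$SRC" "$DST"
    # Once the preview looks right, run it for real (same command without the n):
    # rsync -av --delete "$SRC" "$DST"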


    It's like playing with adult Legos, literally. It's actually quite hard to damage any components; they're designed with what I'd call the pokey-nokey principle: if you try to plug something in the wrong way, it generally won't go in. So, as long as you don't force it, nothing bad can really happen.

    You just take things slow, look at some video guides, and don't apply excessive force --and you'll be fine. :)
    --Or essentially if something feels like it's really hard to plug in, most of the time that means it doesn't go in that way.

    The advantages of Linux and FreeBSD are cost (eg, FREE), customized distros intended purely to be storage servers (you won't find anything as easy to use for a Windows file server, believe it or not), and better software RAID support.

    --Windows Soft-RAID is very primitive compared to Linux & BSD.

    Electric bill doesn't think so sadly, LOL. You burn up extra power in the inverter leaving it on, and add close to another 20% to your power-draw and room heat.


    EDIT: Decided to see if anyone had benchmarked NAS loading of games and did a quick check on YouTube & Google. Turns out this has been tested, surprisingly. (I'd not have expected most gamers to be that interested in it)

    --That said, despite the reviewer's comment of no noticeable impact on gaming, I'd still expect lag spikes during disk reads unless a dedicated NIC is used. For instance, I'd be cautious of that for MMORPGs (constant disk access and loading), thanks to network choke (unless you're on 10gbit + a very strong server). For FPSes, you just won't get the edge of fast map loading in a competitive game (the few-second edge when the map starts).
     
    Last edited: Dec 5, 2017
  3. tsunami231

    tsunami231 Ancient Guru

    Messages:
    14,751
    Likes Received:
    1,868
    GPU:
    EVGA 1070Ti Black
    Ditto on that SyncToy. That could come in handy when I do my backup of my HDD to an external HDD; instead of copying the whole thing over I can just let it copy the newer files.
     
  4. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    That's good to know. I was thinking of getting a CyberPower UPS with Pure Sinewave (https://www.amazon.com/CyberPower-CP1500PFCLCD-Sinewave-Outlets-Mini-Tower/dp/B00429N19W). I want to get one for my gaming rig too (gaming rig is 750W, currently don't have a UPS for it x_X). Will this model be good for the server as well? What VA/wattage would you recommend? I've read that a server can perform a clean shutdown on its own once it switches to UPS when the power goes out.

    Got it, so I'll take 500 as a figure to be on the safe side (so in the 500 there will be the case, the motherboard which supports ECC, the ECC RAM, and a processor which will also support ECC)? (It sounds too cheap to me to be honest, my processor in my gaming rig was 350 euros alone). So then need to add a low end graphics card, and hdds. 3 hdds seem very much on the low side, if I'm going for it, I want a good reliable system that will serve me for years. Would it be safe to use 8TB drives, or is it better having smaller ones? I've read some bad things about RAID6 (http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/):
    • More latent errors. Enterprise arrays employ background disk-scrubbing to find and correct disk errors before they bite. But as disk capacities increase, scrubbing takes longer. In a large array a disk might go for months between scrubs, meaning more errors on rebuild.
    • 'Simplifying: bigger drives = longer rebuilds + more latent errors -> greater chance of RAID 6 failure.'
    RAID10 sounds very good from the little research I did, but that must end up costing quite a decent amount of money if I understood how it works correctly. Example: if the server I'd build in RAID6 had 6 drives of 4TB each, the RAID10 equivalent in storage would be 12 drives of 4TB, since it mirrors?

    Yes the OS being separate on RAID1 sounds like a very good idea. I like the idea of the OS being mirrored on a separate drive.

    Well RAM is quite cheap (don't know if ECC RAM is a lot more pricey than standard RAM though), so if it will increase performance I wouldn't mind adding onto the 2 - 4GB you suggested. (This again seems on the low side to me, but then I've never owned a server :p)

    Intel NIC's it is then. If it'll give less hassle and less issues.

    So I guess we'll be going with Linux then not FreeBSD? Is FreeNAS a free OS like Linux, or a program which we will install onto Linux?

    I think you misunderstood my question here. I didn't say I wanted to get rid of the recording drive in my gaming rig, I meant if the storage drive I have in the gaming rig will end up being redundant since all the games will be on the server (I always move what I want to play to the SSD anyway, so the storage drive is literally, just storage).

    I'd prefer to record locally on the gaming rig rather than onto the NAS to be honest. I believe (correct me if I'm wrong) recording over the network could add issues or lag, and strain the hdds in the server more (I record 1080p 60fps).

    The games I play don't even have or create a save-games folder. It's just a score file (like 10KB or less) in the game's root folder :/

    That's good to know, so will this apply even to names that look like gibberish (extracting a Japanese file while not in Japanese locale yields this result)? It looks like this: https://learnmmd.com/Applocale7zip/AppLocale_1.kjuv.png
    Also, the server will work fine if I'm accessing it through my gaming rig while on Japanese locale, yeah? (this helps a lot to solve compatibility issues with the games). I'm asking this because some stuff changes on Japanese locale (like the / in the folder directory becomes a ¥ instead) (example: C:/Program Files/Common Files will be C:¥Program Files¥Common Files)

    A bit more understandable :)

    Oh believe me I'd definitely want to be super-sure about that! x_X

    That doesn't sound safe at all :/ Does Robocopy have this same issue?

    Does it all have to be done by commands? At work we've got a server that we save onto, it's simply opening a folder (shortcut) on the desktop which leads into the networked volume, and just 'copy and paste' into it.

    That's good to know, I'll be extra cautious :D Yeah definitely video guides.

    If it's better, then I'll opt for it. Thanks

    Haha I could imagine, but it's worth it for the peace of mind of knowing you've got your equipment safe I guess :)

    I would expect lag spikes too, I really wouldn't suggest it to anyone (playing games off of the NAS). I mean, I move them to the SSD for added performance, I wouldn't want them getting choked due to some spike in the internet connection! x_X

    So apart from the gaming rig (which I use solely for games, and gaming related stuff), I also have a laptop (which I use for everyday stuff, like family photos), I assume I can link both the gaming rig, and the laptop to the server and move the stuff from both of them into the server on 2 separate folders? (example, one called Gaming, the other called General). There won't be any issues since gaming rig is Windows 7 with Japanese locale (need it for games compatibility), laptop on Windows 10 with English locale, and server being on Linux?

    Thanks again for all your time in this! Took me an hour and a half replying, so I imagine it must be similar for you!! x_X
     

  5. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Usual rule of thumb that I follow is to double up the power requirements on the UPS. This is because the runtime of UPSes is generally not too good at their full load (somewhere around 5-6 minutes of battery life at high draw, 10+ at 50%).

    If you're doing some work on a machine and have to abruptly stop, you probably need a good 5 minutes to stop what you're doing (save everything), and up to 5 minutes after that to get the computer safely shut down (eg, closing programs). Your storage server should probably have a longer runtime than your desktop, since your desktop will probably be using it at the time. If you're into MMORPGs, you might want to get yourself even more battery time so that you can try to ride out the power outage or safely log off [and notify people] if you're "in the middle of a raid".

    -- The NAS will be easy because it doesn't consume much power. Your main desktop may need a bigger UPS. The main thing I can say is to get yourself a power meter (a Kill A Watt or similar) and measure what your average power draw is (if you're often gaming, then measure gaming load); after that, look up the UPS runtimes at different load levels -- so that you can get a feel for how long it'll last you.


    You'll find that Dell prebuilt PowerEdge servers with Xeon processors actually start in the $500-600 price range (without adding disks). If you take a look at these machines it may give you an idea of what type of hardware to go for, and it should cost less building it yourself. In general though, I'd say that a storage server doesn't need a super-fast processor; a lower-end Xeon E3 (or an AMD) will serve the role just fine.

    These chips (the Xeons) will come with Intel HD Graphics, so you won't need to buy a videocard.

    -Processor price is mainly an aspect of speed and only starts going up a lot at the higher end; a slower server chip can honestly be just as reliable and much more economical.

    Yep, 3 HDD's is the absolute minimum you can get away with in my opinion. (which would be basically RAID 1, with a spare pre-purchased for when it fails)

    On HDD size, as you've discovered, RAID verification can take a VERY long time depending on the capacity of the disks. In the case of 2TB HDDs in a RAID10 array (7200RPM, 4 disks), it'll take on average around 6-8 hours to complete the verify (depending on what else is accessing the disks). For every 2TB that you add to each disk you'll be increasing that time (you can just multiply the hours for a rough estimate). Personally I prefer my RAID checks to complete in less time than it takes me to sleep, so that I can put this on a schedule.

    So, I don't like using drives larger than 2TB each in any array. Bear in mind that you can easily add 12+ disks to a machine with software RAID, it may just require that you buy an addon disk-controller to use in addition to the motherboard controller. However, this is one of the big advantages of software RAID over hardware RAID, that it's easy to expand to massive capacities.

    To save yourself pain down the road, I strongly suggest that in any machine with lots of disks you "label" each drive with a sticky (physically stick a label on the disk) and also stick a label on the end of each connector. This will make things A LOT less painful when a drive fails and you have to identify which one it is for replacement.


    In my experience in the business world, people seem to consider anything less than a day for verify and rebuild-time as acceptable.

    **Airflow consideration**: In any computer with more than two mechanical disks stacked on top of each other in a mirrored RAID, make sure they have a fan blowing over them. Just about any airflow and they'll be fine, but stale air around stacked disks that are thrashing for 6+ hours will cook them otherwise.
    (or minimally may cause IO errors from heat which mess up the integrity check; HDDs don't like getting hot)

    RAID 0 is striping. -- splitting data across two or more disks -- [increases read and write speeds, capacity is additive of the drives involved]
    RAID 1 is mirroring. -- keeping at least one additional copy of the data -- [increases read speeds, and you lose half capacity in a 2-way mirror]
    RAID10 is combining RAID1 and RAID0. Or basically speed + strong redundancy.

    Anyway, you would need "8 4TB drives" in RAID 10 to reach 16TB capacity (two-way).

    -4 x 4TB -- striped = 16TB
    -Now you need to 'mirror' those 16TB, which takes at-least another 4 disks.

    If you want a second-mirror (three-way striped mirroring), then yep you'd need 12 disks (another four).

    Shouldn't cause any problems; Unicode filenames and paths shouldn't be an issue for BSD, Linux, or a Samba server.


    robocopy and rsync both have the same destructive potential. They're power-user tools for sure.

    There are some GUIs for both of them, but the tools themselves are command-line based. When the storage is mounted like a network drive you don't have to use these, of course; the suggestion of them came up more for how you can sync files to external drives (the USB HDDs).

    --Windows Drag and drop will work fine, just it's undesirable speed wise for tons of files. (talking if you need to sync a folder with hundreds of thousands of files and only transfer new files, changes, deletions)

    Ah, so you copy games out of storage onto the SSD to play them. This will work to protect those games and avoid redownloading them.

    Performance wise: Yep, that's what I was getting at too with loading games off the NAS. Recording over the network (to the NAS) almost guaranteed would cause lag, but this could be eliminated by adding a second network-card to your gaming rig, and then using that NIC for access to the NAS. (aka, dedicated NIC for NAS access, dedicated NIC for game-access)

    Unfortunately even in this case, the recording could be fighting with loading the games (if everything was read from / written to the NAS). All in all it's best to just separate IO to avoid chokes. Gaming is serious business, as the saying goes. Haha.

    -- As far as single score files in the root of the game folder go: no problem, you can symlink single files instead of a full directory junction. This should still work.

    FreeNAS is FreeBSD based.

    Far as what it is, you'd call it basically a distribution. It's a pre-customized storage solution (bundle of services) designed to work like a NAS appliance.
    --You install FreeNAS on a machine, and FreeNAS includes FreeBSD + everything you need.

    Not a problem, I figure that other people reading this may be considering RAIDs, concerned about data-loss, and maybe they're not quite committed to moving to a workstation just yet. (lots of overclockers out there)

    Loss of data sadly isn't taken seriously often enough, and people only start to worry about it after they've lost everything a few times (despite the fact that our entire lives are basically on computers now). I've gone through that as a programmer with projects, and so I've kind of learned to appreciate efforts to keep data safe. (RAID, backups, cloud storage, etc) When I see someone else who wants to prevent catastrophic failure and avoid going through all the crap involved in the aftermath -- losing weeks, months, or years of work -- it's honestly worth the time to have a chat like this.

    Prevention takes time, effort, and money, but in the long run it's really worth it when you start to realize how much losing anything costs.



    And yeah, you can create multiple folders and mount points. You could create private storage space per machine, if you have a family and need to protect each person's data.
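
    Under the hood those Windows-visible shares are served by Samba. FreeNAS sets this up through its web GUI rather than by hand-editing, but as an illustration (the share names, paths, and users below are invented), the per-machine separation amounts to something like:

    Code:
    # Illustrative smb.conf fragment: two shares, each restricted to its own user.
    [Gaming]
        path = /mnt/tank/gaming
        valid users = gamingrig
        read only = no

    [General]
        path = /mnt/tank/general
        valid users = laptop
        read only = no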
     
    Last edited: Dec 6, 2017
  6. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Thanks, I'll keep that in mind when it comes to buying the UPSs.

    That's a very good price, all that comes with ECC though?

    Won't some servers fail to POST though if they don't have a videocard installed (as stated earlier in the thread)? I don't have a spare graphics card lying around, so I'd need one to get past the boot screen.

    What do you mean by 'put this on a schedule' regarding the RAID checks? Are these something I will have to schedule myself, or will the server be doing them on its own?

    Sorry if it sounds like a stupid question, but wouldn't it take the same amount of time to finish a RAID check on 4 x 4TB drives, and 8 x 2TB drives? It still has to verify the same amount of data.

    I'm just saying because building an array with 4TB drives rather than 2TB ones would cut the hard drive failure rate in half (fewer HDDs means fewer can fail), plus less heat, less physical space taken up in the array, and better airflow. Also, building a 12-disk array with 2TB drives in RAID6, I'd only be getting around 18TB-20TB, am I right? Seems a bit on the low side to be honest :/ I have around 8TB of data already.

    Oh btw, will I need to leave around 10% free space in the server, like you'd do with any normal hdd? I assume so for it to be able to have space to move files around and such.

    Yeah this is something that also worries me as stated above with having loads of smaller 2TB drives in the array :/ By fan over them, you mean the ghetto method of the side panel open with a stand/desk fan blowing on them, or case fans?

    Why just 'thrashing for 6+ hours'? Won't the server be on all the time? :/

    I'm guessing the 3 way mirroring offered by RAID10 outshines RAID6 in safety of data rather than having the 2 parity blocks RAID6 offers?

    Good to know :) Even the gibberish I linked an image to in my previous post will be fine?

    I don't think the Windows drag and drop method would affect me negatively that much, because I won't be needing to sync hundreds of files if everything is getting saved to the server directly. If the default download location is set to the server (so for example a folder called Downloads on the server, not on my gaming rig), everything new I save will just go in there, and I just move it to its corresponding folder for organized filing. There won't be any need for lots of syncing, only when I'm taking a backup of the server to the 2 external drives method I mentioned earlier in the thread. Please correct me if I'm wrong, remember I'm just thinking of how this will work, I haven't tried it.

    So you mean, yeah the 4TB storage drive in my gaming rig will not be required since everything is going in the server directly?

    Yeah definitely, don't want any lags, neither in the game, nor in the recorded footage. As you said, gaming is serious business haha xD

    I still don't get how this will work, if the game needs to read the score file from the root folder, how can I put the score file in another folder on the server, and the game just reads the shortcut/symlink?

    So I'll be installing FreeNAS then, not Linux? Will virus scans and the like still have to be run on the server, or it's a whole different operating system?

    That's very nice of you :) Hoping all these posts could help some other users out there too!

    That's good to know, thanks :)

    So, a question about that disabling write caching mentioned earlier. Should this be applied to all my drives? Or just the externals I use for back ups? I don't think it will be of benefit to the SSD and the recording drive in the gaming rig :/ What about for the storage drive in the gaming rig and my general laptop?

    So, I guess for full safety measures, I'm best off building an ECC desktop as well apart from the server, and using that to do all my downloading and saving, directly to the server? And just use the gaming rig to play and only read/view the files? (rather than save stuff through my gaming rig which is non-ECC, and my general use laptop, which I highly doubt is ECC too). (Then again, if I build an ECC desktop, and save something directly into it, there won't be any worries between copying files to the server from it later, is there? (since they will both be ECC) Or is it always safer saving directly to the server)
     
    Last edited: Dec 6, 2017
  7. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    This is VERY good advice btw, thank you. I'm sure it will be helpful in the future.
     
  8. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    If a server has integrated graphics, it'll generally POST ... machines often seem to dislike having no videocard and no IGP though. (an example would be a Xeon E3 1270 vs the 1275 -- they're otherwise identical chips but the 1270 has no IGP)

    When HDDs break down, it's sometimes abrupt (total disk failure), but more often the drive just starts slowly degrading at the surface (bad sectors, unrecoverable read errors, etc). The idea is that your data is scattered all over the disks' surfaces, and if you don't check each disk routinely (say, every week or two) then you simply won't know if a drive is fully readable.

    It's dangerous to not know if a disk is readable, because like in the case of two drives -- consider the scenario of disks A and B in RAID1:
    ** "A" -- Has some bad sectors appear, parts of the disk are failing, but data is in general readable. Some of our data is NOT readable (we haven't read this data in ages). As we've not 'tried' to read ALL data on the disk, the RAID system has not detected this problem yet.
    ** "B" -- Some number of weeks later after drive A has started to degrade, disk B dies (completely). The RAID notifies us of this failure, and so we try to "repair" the array and replace disk B.

    --Unfortunately, as drive A has failed as well (which we were not aware of), our RAID can no longer be rebuilt and we've lost some of our data.

    ^ Routine RAID integrity checks could've avoided this. Yes, it's something you'll have to configure, but almost all RAID systems have such an option and it's easy to do (you just choose a time for it to be run). FreeNAS is no exception here.
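
    (FreeNAS storage is ZFS-based, and its scheduled integrity check is called a "scrub". You pick the schedule in the web GUI, but from the shell it amounts to this -- the pool name "tank" is just an example:)

    Code:
    # Start an integrity check (scrub) of the pool named "tank":
    zpool scrub tank
    # Check progress and see whether any errors were found or repaired:
    zpool status tank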

    In the ideal scenario, verifying an array takes the time needed to read back an individual disk in the array (there can be things like interface IO chokes that slow you down, or poor implementations on some BIOS RAIDs). All disks in the array are read concurrently during verification, so the more disks, the greater the speed in reading back data from the array.

    Bigger disks is a tradeoff, since you get more space, and need less drives in the array. However, in trade for that, your verification and rebuild times go up immensely.

    Per free space: You will need to leave some space on your volumes, like a normal drive would need, so that the OS (FreeBSD) has space to work with in order to deal with fragmentation (keep it down when writing new files).

    When buying a case, you'd look for a chassis that has fan mounts over the drive-cage. It's a fairly common feature, since a lot of them put the drive-cage directly behind the front intake. If you mount HDDs in a 4x4 bay where the CD-ROM typically would go up top, then you'd similarly want to buy some cage that has a spot for a fan in front of it.

    The server will be on all the time, yet the verify time is roughly going to be the same time that it takes to "repair" the array if a disk fails as well. During the time that you're rebuilding from a disk failure, you really don't want more disks failing.

    -- For me, the point of having it done during my sleep is so that I don't deal with slow-speeds (caused by competing with the verification) during the day.

    The nice part about RAID1 and its variants is that they're simple. You have complete copies of your data rather than reconstruction from parity, and that makes it easier to believe there's no implementation bugs that'll prevent you from building or accessing the array.

    Three-way mirroring allows for voting-logic (eg, 2/3 say this is what my data is) to fix read-errors that aren't unrecoverable, where data is just stored wrong for some other reason on a drive and the drive's checks have failed. (drive reports data intact, but it isn't -- this is fairly rare in practice)

    Yep, even gibberish, symbols, and other things that Windows won't accept are usually fine with unix naming.

    Right, this is something you'll probably just use for backups of the server machine to externals.

    Here's an example of a single file symlink between folders A and B. Imagine that "B" is on your NAS, and "A" is the local game folder on the SSD.

    Code:
    C:\Windows\system32>cd C:\Users\x\Desktop\A
    C:\Users\x\Desktop\A>mklink Hello.txt C:\Users\x\Desktop\B\Hello.txt
    symbolic link created for Hello.txt <<===>> C:\Users\x\Desktop\B\Hello.txt
    C:\Users\x\Desktop\A>echo Hello World!~ > Hello.txt
    --File "B\Hello.txt" now contains "Hello World!~".

    You'd be installing FreeNAS, so FreeBSD effectively.

    Malware won't be executed on your NAS, and your NAS will probably be immune to it (being that most malware targets Windows). However, you should still scan the files that are stored to the NAS, and the NAS' contents can still be infected due to folders being mounted on Windows machines. (eg, a virus run on Windows could infect the network-share, and other Windows machines accessing it could read these infected files)

    I'd personally leave it on for disks in the array, and turn it off for the externals that you're plugging in just for backups of the NAS.

    If you build an ECC desktop and have a RAID on the desktop, your desktop would be fairly reliable just like the NAS. This is all something you'll have to think through: how far (how anal) you want to go in protecting files. Laptops sadly usually don't have ECC, though there are Xeon laptops that have ECC (these are very expensive).

    Hopefully at some point in the future AMD can give us a cheaper workstation option with ECC (some "pro" Ryzen mobile version perhaps), so that we can finally have affordable, reliable, work laptops suitable for RAIDs while traveling.
     
  9. D3M1G0D

    D3M1G0D Guest

    Messages:
    2,068
    Likes Received:
    1,341
    GPU:
    2 x GeForce 1080 Ti
    Why not just use your server directly to download/copy files? My Threadripper system is built to be a full-time computing machine, but I also use it to browse the web and watch videos. It sits alongside my gaming PC and I control it through Synergy (an app for sharing mouse and keyboard across multiple computers), and I control all my other machines through Chrome Remote Desktop. Having a separate ECC desktop apart from the server seems kind of redundant, IMO.

    * FYI, the Threadripper/X399 platform supports ECC memory (unbuffered), although I'm currently not using them. X399 also has a lot of lanes and could be used for a home workstation/server. If I ever need a data server in my home, my TR system would be it.
     
  10. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    For me, the reason I have an ECC backup server and also an ECC desktop is dedicated-roles.

    While arguably I could have a single machine that I use for storage, and also run KVM + PCI-E passthrough + Windows under a virtual machine on it, this would result in my attempting to shove a videocard and other high end parts in my file-server (which has 24 HDD's and 2 SSD's). I'd say that the main incentive is purely simplicity and separation of roles rather than attempting to aggregate everything on a single machine.

    For instance, I don't want to use Windows for my file server, nor do I want to run FreeBSD or Linux under virtualization, yet I also need a Windows dev machine with my code directly on it to get low compile times. One thing that's always bothered me is debugging crashes during long-term execution (testing), and finding that there actually wasn't a software problem at all; it was caused by some hardware failure (eg, bad RAM / instability).
    -- After seeing this happen twice and losing days to debugging rare crashes (in products intended for 24/7 use) that didn't exist, I stopped using non-ECC for real work.


    However, yeah, you're right in that I reckon he can just get wget or curl installed on the NAS, SSH into it, and perform the download there. There may even be a plugin download manager for FreeNAS, yet I haven't looked into that before. Synergy will be tricky unless a GUI is installed; FreeNAS (and all similar distros) is intended out of the box to be a GUI-less dedicated file server. The other problem may be putting the storage server close by -- 24 HDDs chugging is pretty loud, and I had to move mine to another room.
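
    For instance (the hostname and paths are invented for the example; fetch(1) ships with FreeBSD, and wget/curl are installable), a remotely started download would look like:

    Code:
    # From the desktop: log in to the NAS over SSH and run the download there,
    # so the file lands straight on the protected volume.
    ssh admin@freenas.local 'fetch -o /mnt/tank/downloads/file.zip https://example.com/file.zip'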

    EDIT: To be clear, I'm not trying to knock virtualization; it has its place. There are other issues though, like physical room to work in. File servers are just too many wires, disks, and controller cards; all of the space is occupied already.
     
    Last edited: Dec 6, 2017

  11. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    That's awesome, I'll make sure to get a processor with an IGP then, as long as a graphics card won't aid performance in any way.

    Yeah I completely agree with all of that. So basically I just have to set it all up once, since I just have to choose a time for it to run scheduled checks, right? Then it does it on its own every week?

    I'll set the time for while I'm asleep (and while I'm at work), to not have these checks happening while I'm working on it (saving stuff or accessing it) at the same time. So then I could just check the logs when I'm home?

    Hmmm that does sound like quite a trade-off :/ I'll really have to think about that and see which option to go for :/ So will I still be able to access my files during these verification and rebuild times?

    Yeah I assumed as much for the free space, makes perfect sense that it would need it. So I'll need to run defragmentation on the server? Didn't we say earlier in the thread that unnecessary defragmentation could do more harm than good, or is that only for non-ECC systems? Will defrags be run on schedule as well, or something I have to do routinely myself?

    Yes I would DEFINITELY want a case with very good airflow, it gets quite hot here in summer (36 degrees celsius indoors). Yes I've seen cases that come with 2 front intake fans.

    So if there is a disk failure it is best to not access the server at that point, and wait for the verification to be done? This is only in case of a disk failure right? While it's doing the routine scheduled checks I could use it like normal?

    Yeah during sleep (and while at work) as stated above seems like the best idea. I wouldn't want to continue straining the disks by accessing them at the same time that they're trying to rebuild a disk failure.

    So in short, RAID10 over RAID6? If that will make the data have less chances of errors.

    Got it, makes sense. Any ways around this 'drive reports data intact, but it isn't -- this is fairly rare in practice'?

    That's very good to know, and very important for me, cause some files were extracted with this gibberish as file naming from an earlier laptop I had which wasn't on Japanese locale :/

    Exactly, I was thinking of using robocopy and selecting the folders I would like to be synced when taking a routine monthly backup... while being VERY CAREFUL of the direction the sync is happening in!

    Got it ok, while I've never used code, I think I got what you're getting at with the symbolic link now. So is this syncing instantaneous? It won't affect gaming performance at all?

    FreeNAS, got it :)

    That's awesome news about the low malware threats! So for scanning the files, if I'm saving them through an ECC desktop directly onto the server, I run a virus scan like I normally would on a specific drive, but choosing the networked volume instead? (using Windows on the ECC desktop?) Or is it something I will be running from FreeNAS?

    Thanks :) Makes sense to not decrease performance in the server and where needed.

    I wasn't thinking of having RAID in the ECC desktop, since I was planning on saving everything directly to the server from it. I think having RAID10 (so 3-way mirroring) + 2 external drives as backups of the RAID10 array is sufficient? (so that's 3 copies of the data in the array, and another 2 copies on externals)

    I figured the laptop wouldn't have ECC either. So I'll start using the ECC desktop to do the saving of new data, directly into the server, this is the safest I could get? (There are no more additional features to look out for apart from ECC and RAID that could guarantee more safety of data? I don't want to end up building 2 more machines, to then discover there are better/additional ways to keep my data safe which I didn't implement). So accessing (only reading) this data from my non-ECC gaming rig will do no harm to the data stored on the server? Also, copying a file from the server, to the SSD in the gaming rig (like getting a game out of storage), will have no bad effects on the actual file still stored in the server?

    Same here, I like separate roles, like how I have the gaming rig which stores all my gaming stuff (not just games, even soundtracks and anime art related to the games I play), and a laptop which is just for family photos and other general use stuff.
     
    Last edited: Dec 8, 2017
  12. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    What would happen if the motherboard controller or the addon disk-controller were to fail? Would the drives corrupt or lose data?
     
  13. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Yep, the server will only have a text-mode (console) display -- so video performance should be completely irrelevant. In fact, you can go into the BIOS and reduce the IGP memory share to as low as it'll go. (to have more RAM available for disk-cache)

    You can even get email alerts (so you'll know while at work) if an issue is detected. Also, you can "pre-install" a drive to work as an instant spare, so that if an error is detected the spare can be readied to go in the place of the failed drive.

    -If you have a spare, the RAID will be rebuilt automatically and you just remove the defective drive at your convenience after the rebuild is done.
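
    With ZFS under FreeNAS, attaching a pre-installed disk as a hot spare is a one-liner from the shell (the pool and device names are examples; FreeNAS exposes the same thing in its GUI):

    Code:
    # Add disk ada4 to the pool "tank" as a hot spare; ZFS can then rebuild
    # (resilver) onto it automatically when a member disk fails.
    zpool add tank spare /dev/ada4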

    You can access all of your files during both a verify and repair, however in both cases your access time, read, and write performance will be significantly worse during this. (since the RAID is busy while you're using it and trying to do two things at once)

    On defragmenting: I'm not even quite talking about defragmentation here. Having more free space will naturally result in fewer fragments in newly written files.

    --When you have very little free space, the OS may be forced to choose small 'segments' of contiguous space to use for writing. The more you have, the higher the odds a full file can just be written in its entirety without being broken up across free-space-segments.

    EDIT: Technically all file systems will accumulate some amount of file and free-space fragmentation over time. I personally don't bother defragging them, though I know a lot of people who make a big deal out of this.

    For me, any gain in performance (from a defrag) is not worth "ANY" risk of data loss (so, I just don't do it on my file-servers). However, unix filesystems are actually pretty good at dealing with fragmentation, and modern filesystems as a whole handle this pretty well. RAID10 is going to give you very high performance either way, which will mitigate most any effect of fragmentation over time.

    You'll see arguments over it all over Linux and BSD forums, there's people who advocate defragmenting, and those who argue against it. I say just pick what's important to you. Since you're looking at ECC and RAID, you already know what you predominantly care about (data integrity).

    --- Still, leaving the OS / filesystem some free space is a good idea, since it'll reduce file-fragmentation that you accrue vs filling to the brim.

    Generally the idea behind RAID is to "maximize uptime". You can actually use it during a rebuild if you had to, but like with overusing a computer and running many processes at once, this will slow down your rebuild-time / make it take longer. I usually try to minimize how much I touch a RAID during repair, but otherwise use it whenever I feel like it during verifications (where no damage is known yet).

    There's no right or wrong here. In a business setting, RAIDs are intended to be used during repair, as downtime costs money.

    3-way mirror is the most reliable defense against it. (the voting logic & having a third copy)

    -- Per RAID 5/6 vs RAID10, you'll find almost all datacenter operators use RAID10. There's a reason, which is that RAID10 is fast and RAID1 is the most-reliable in practice. (despite the high space-cost)

    On paper RAID5/6 look fantastic, yet "in practice" and "in theory" are entirely different things. The numbers show that in deployment, nothing protects data as well as RAID1 / 10 in terms of successful recovery rates.
    (Google has a lot of studies on this from their server operations)

    It does have a small performance impact, but in this case most of your game is local on the SSD, with the exception of the score file or save-data. It's like rigging a best of both worlds thing.

    -Redundancy for what matters, speed for the rest of the data. Windows' symlinks will do this totally transparent to games, and they work as if there's no symlink there.

    Yeah, you'll probably have to run your virus scanner from a Windows machine, and scanning a network-volume should be supported / pose no problem.

    -There's ClamAV which can run on FreeBSD and scan for Windows malware, yet I would not trust its detection rate vs a scanner designed for Windows software that's run from Windows.

    ^ More than sufficient. You can basically use the network-drive like your fixed disk where data matters, and ECC will make that extremely safe.

    -Only need a local RAID if you want locally stored copies of data pushed back and forth to and from the server.

    --Beyond a RAID, ECC, and disconnected backups (which somewhat protect against malware infection if you notice quick enough) -- the most important thing is good security-practices and common sense.

    If you use your noodle, you have off-site backups in the cloud (or protected like in a fireproof safe), and you combine RAID redundancy with ECC + non overclocked computers, you'll be as well off as you can get (this is business grade). eg, doing everything realistically possible to protect against data loss.

    * Essentially, you get the hardware protection ... yet hardware protection is not protection from acts of god, stupid mistakes, and other people.

    Uncorrectable IO errors in saving data to a disk can cause drive-corruption, but this is both extremely rare and also something that 3-way mirroring helps protect against.
     
    Last edited: Dec 8, 2017
  14. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Or get 8GB of RAM instead of the suggested 4GB? :p

    That is awesome and very helpful, thank you. I'll definitely get a couple of spare drives in case of a failure. The fact that it starts rebuilding onto the new drive instantly (so shorter time that one of my mirrors would be down) puts my mind much more at ease.

    I get that, so that's why it's best scheduling it for while you're sleeping and at work. So how often will it be doing a verify? (daily, weekly?) I guess repairs only when something is broken/failed?

    So no defragging then; as you said, this is going to be an ECC + RAID system, it's not performance that matters, but the safety of the data.

    So, leaving around (not less than) 10% free space would be enough?

    Ok so I'll avoid using it during a rebuild then. Rebuilds only happen when something breaks though right? (like a drive failure for example, so something that would be rare). It wouldn't be doing a rebuild for example every week? (maybe rebuilding file structures or something for better data integrity to make sure everything is still intact). Those would be the verifications instead? Remember I've never used FreeNAS or a server so I have no idea how frequent this stuff happens. So I assume FreeNAS will alert me properly when it's doing a rebuild instead of a verification.

    I understand that, but I'm just a home user, so no money lost in my case. I'd just wait it out until the rebuild is done.

    Got it, then I'll definitely go for RAID10.

    This small performance impact, will it only take effect while the game is saving? (since the score/save file will be the only thing symlinked) Or is it for the overall performance throughout?

    Got it, thanks :)

    That's good to know. So if everything is being saved from the ECC desktop to the server, but I'll sometimes need to use the gaming rig (non-ECC) to view some stuff from the server, there is no chance it could inject errors into the files I'm viewing as long as I don't save? (For example needing to watch a gameplay video, or read a word document but not add any alterations to it). Even when getting a game out of storage from the server, copying onto the gaming rig's SSD, the file that is on the server can't get any injected errors? Sorry I'm stressing this, I just want to make sure since the gaming rig is non-ECC. No point in building a server with ECC, and an ECC desktop, if then viewing the data from my gaming rig could still corrupt/inject errors into them.

    That's good thanks. By disconnected backups I assume you mean external drives (like the 2 external drives method I do, just back up into them and store away safely)

    By off-site backups in the cloud, you mean a cloud / online storage service like iCloud, Google Drive, etc? Yeah 1 of the 2 external drives I store away from home in case of a fire or theft :) That's good to know.

    More of a reason to go for RAID10 then.

    So, when I want to take that routinely monthly back up of the server to the 2 external drives method I mentioned earlier, do I connect the external drives to the server directly, or to the ECC desktop and access the server through that?

    When moving a gameplay recording I want to archive from the recording drive in the gaming rig (gameplay footage comes to around 12GB) to be safely stored on the server, will the copy to the server be limited by my maximum internet download/upload speed? Or does that not come into effect since we will be connecting to the server over the LAN? Same question the other way around: when taking games out of storage from the server to the gaming rig's SSD, will the transfer speed be as fast as my download speed?

    Thanks again for all your replies and time in this thread. You have been beyond helpful and so informative.
     
  15. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Once every 1-2 weeks is probably more than enough for scheduled verifies -- this is something that you get to decide (the frequency).

    You just want to make sure the disks get checked from time to time so that one doesn't silently fail, as these verifies are the only way to catch silent failures. A RAID verify doesn't touch your data and only reads it back; it's just a check to make sure everything is sane (so to speak) and that the data copied across your disks both matches up and can actually be read.

    -And yep, you only repair / rebuild the array if something goes wrong. (some inconsistency is found during a verify)

    10% on a very large volume (many terabytes) is a pretty good amount. It's not like SSD-RAID (without TRIM) where free space has an effect on disk lifespan, but rather just so that the filesystem has room to function properly.

    Should only happen when things break (data mismatch, inability to read some portion of the disk [URE -- unrecoverable read error, bad sectors, spindle / seek issues], or a total drive failure). If you start seeing frequent rebuilds and a drive isn't completely offline, that would be the telltale sign that something is wrong and that it's time to inspect the SMART data to look for relocated sectors or other problems.

    FreeNAS should also have a record of what's going on with the disk controller's logging. So really it'd just be time to dig in to the logs if you saw something like this to find out which drive is responsible.

    ^ In my personal experience with FreeNAS, FreeNAS will automatically eject the failing drive from the array once there's too many errors, and before that "flag" the failing drive with an alert in the web-GUI. So it's just a matter of checking in that things are running smoothly from time to time.
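
    If you do want to dig in yourself, smartmontools is the usual way to read SMART data from the shell (the device name is an example):

    Code:
    # Dump all SMART health data for the first SATA disk:
    smartctl -a /dev/ada0
    # Attributes worth watching: Reallocated_Sector_Ct, Current_Pending_Sector,
    # and Offline_Uncorrectable creeping upward over time.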

    If you're symlinking your save game data, you might see slightly longer save-game load times (although I've never noticed it). OS write caching should completely mitigate / hide any speed impact of writing save games.

    I'll stress that I've never actually seen a performance hit, such as with F6 / quick-save and rapidly storing save-games. This creates no more stutter for me in saving than directly writing to an SSD. Windows' disk caching does a very good job with these things.

    ^ Right, as long as you're not routinely writing back data you should be golden. Even if your non-ECC gaming rig has some bad DIMMs in it, and even has signs of local disk corruption from it, the data on the NAS that you're reading should still be intact.

    Yeah, cloud storage is pretty much the same concept as having files in two physically different places. This I'd call acts of god and crime protection.


    --Just like your RAID won't save you from a flood, fire, or someone coming and ransacking your place, the same goes for malware and infection. If you wind up getting a virus that scans for all images and replaces them with squid, or ransomware that starts encrypting your files, your offline copies that aren't in constant sync are your last line of defense.

    I'd connect your USB disks to the NAS and perform an rsync (or similar) of your shared volumes to each of them from there.
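
    As a rough outline of that monthly run (the device and paths are hypothetical, and the external drive would need a filesystem FreeBSD can write, UFS in this sketch):

    Code:
    # On the NAS: attach the USB drive, mirror the shared volume to it, detach.
    mount /dev/da0p1 /mnt/external
    rsync -av --delete /mnt/tank/ /mnt/external/nas-backup/
    umount /mnt/external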

    For reading files back, every disk in the array will be used there in tandem (both the mirroring and striping), for writing files you receive the benefit of striping but not mirroring. RAID10 is quite quick for both reads and writes, yet technically faster in reads.

    --Anyway, your disk throughput is enough that you'll saturate the typical gigabit connection before hitting any disk-IO walls. The best way to speed up transfers to the server, if you find that it's not fast enough, would be throwing a 10gigabit NIC in there. Overall, on a gigabit link you'd expect about 100MB/s (gigabit is 125MB/s in theory, minus protocol overhead), which is not bad by any means.

    EDIT: If you do go for a 10gbit NIC, be sure to read around on BSD compatibility. Can't stress enough how picky BSD is (for anything non-Intel), heh.
    --Another option is NIC teaming, though I personally would suggest that you just try things with a regular 1gbit LAN first. Gigabit is fairly quick already. (despite that you may not be getting the full speed of what the array can do)

    No problem, it sounds like you're getting the gist of what you want to build, how it could be set up, and how to do more to protect your files than you're currently doing. You were already on the right track coming into this, with a good habit of routine external backups.
     
    Last edited: Dec 9, 2017

  16. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    That's cool, I'll set it up weekly then.

    Those verifications sound very helpful, testing every file to make sure it can actually be read. Having a weekly check will really give me peace of mind that everything is OK.

    That's awesome, and with RAID10, if it detects an inconsistency on one of the drives, there are another 2 copies of it, and it gets rebuilt with the 2-of-3 voting logic, right? So could a verify ever find an inconsistency where all 3 copies are unreadable (short of getting some kind of virus)?

    Define 'many terabytes' (everyone has a different perception of the word many lol). I would ideally want it to have 20+TB of storage (I already have 7TB worth of stuff). So since our RAID10 uses three-way mirroring, for every disk I would need 3? So to get 20TB, I would need 30 x 2TB HDDs (and another 2 for the OS)? (That's why I had asked if it's worth building it with 4TB drives.)

    This is what I don't get. I saw this image (https://en.wikipedia.org/wiki/Nested_RAID_levels#/media/File:RAID_10_01.svg) of a RAID10 configuration; how is that three-way mirroring? A1, A2, etc. are only on 2 disks each.

    That's awesome, so hopefully not frequently haha xD 'If you start seeing frequent rebuilds while a drive isn't completely offline' -- I don't understand this, aren't the drives meant to be online, not offline?

    That sounds awesome about FreeNAS, the logs will definitely help.

    This is a heavenly piece of information. So if I have extra pre-installed drives to act as spares, FreeNAS will eject the failing disk on its own, rebuild the files that were on it (using the remaining two copies) onto one of those spares, and automatically set that spare up as part of the array? That would be amazing xD

    So if, for example, disk 3 were to fail (the one with the sticky paper labelled 3 on it), and its data gets rebuilt onto one of the pre-installed spares (let's say number 15), once it's done should I power down the server and put disk 15 in disk 3's place (relabelling it as disk 3 and hooking it up to disk 3's SATA cable and power connector), or leave it as is?

    If it's only for loading times and not performance drops while playing, then that's a very good feature, having the save files on the server instantly :)

    That's very good news. Thanks. It's good knowing the data in the server will stay safe, no matter what machine is viewing it.

    Eh, thought that's what you were referring to. I'm not really a fan of cloud storage (having my data on other people's machines).

    That virus would be terrifying... x_X So this virus would work its way in from the gaming rig or the ECC desktop I assume? (Since the server won't be used to browse the internet and is just acting as storage.) Since we said earlier that virus scanning will be done from my Windows PC scanning the networked volume, does this mean the server doesn't have its own antivirus? Does this squid virus only attack servers, or non-Windows systems? I've never heard of something like it, completely horrific x_X

    Got it and noted :)

    Oh ok, that's cool, I was just a bit concerned because my internet speeds are 10Mb/s down and 1.2Mb/s up. If a 12GB recording had to be uploaded at 1.2Mb/s, it would take hours! If it's around 15 mins I'll patiently wait for it :)

    I think it's better going for the Intel NICs if we know there's better compatibility. Like I said, I'm not in a rush; as long as it's not hours, I'll wait it out knowing the recorded footage is going into archived storage safely.

    Haha yeah, just the gist for now though; once I actually build it is when the sleepless nights come in, till I get it all set up and safe how I want it :)
     
  17. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    We'll look at this in terms of the two types of errors you could see: data that's completely unreadable (up to a total disk failure), or data that's readable yet simply wrong (a form of URE).

    --If all disks are functioning yet a mismatch is encountered during a verify (meaning data reads back but doesn't match the other disks), this is where voting logic comes in. The matching data on the other two disks is used to repair or regenerate the data on the third drive. (So single-drive errors get scrubbed out by your routine verify.)

    Mathematically it's possible for this to happen on all three at once, yet the odds are insanely low. A single error of this type happens roughly once every ~13 terabytes of data read from a drive. Even with a single disk you have a very low chance of it happening, and each time you add a mirror of the data you take that probability of failure to the next power (i.e., multiply it by itself again). With 3-way mirroring, the chance is effectively [error ^ 3]. Bear in mind that the single-disk errors are the ones to worry about, and those are totally protected against by the array.

    ^ It's also far, far more likely that the disk will simply report the data as unreadable, in which case you don't even need voting logic.
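    For what it's worth, here's that math as a back-of-envelope sketch. It assumes the one-error-per-10^14-bits figure commonly quoted on consumer drive spec sheets, 4 KiB sectors, and independent errors -- simplifications, but enough to show the scale:

        # Rough odds behind the [error ^ 3] argument.
        SECTOR_BITS = 4096 * 8                 # one 4 KiB sector
        p_sector = 1e-14 * SECTOR_BITS         # one copy of a sector going bad
        p_all_three = p_sector ** 3            # same sector bad on all 3 mirrors

        sectors = 12e12 / 4096                 # sectors in a 12 TB verify
        print("%.1e" % p_sector)                 # ~3.3e-10 per single copy
        print("%.1e" % (p_all_three * sectors))  # ~1e-19 across the whole array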


    --If one disk has many inconsistencies and gets ejected from the array (deemed untrustworthy), odds are you'll still have two more copies of that data. In 3-way RAID10, you may even be able to lose more than one disk and STILL have two copies of the data on the remaining disks. As you're performing a routine verify of the array (e.g., weekly), chances are strongly in your favor that the copies stored on the remaining disks will match each other. Just like UREs, it's "technically" possible for two or three (or more) disks to have an error at the exact same time, yet this is where the math comes in and says it's like winning the lottery (odds so low it probably won't happen in multiple human lifetimes).


    Anyway, take a look at this configuration as an example: a three-way mirror (RAID1) of two-disk stripes (RAID0). So, this is a RAID10 array:
    A1 + A2 <= RAID0 striped
    B1 + B2
    C1 + C2

    *I'll use slightly different labeling than the Wikipedia notation: each label is a physical disk, the letter tells you which copy (row) it belongs to, and each row on its own could be accessed as a complete volume. (Hopefully this helps.)
    The A, B, and C rows are RAID0 striped sets of two hard drives each. On top of that, the A, B, and C rows mirror one another (RAID1), meaning all of these disks are in one big array.

    Now, assume that something like the following happens, where the disks marked [X] "die" simultaneously (two separate scenarios, left and right):
    A1[X] + A2[X]  |  A1[X] + A2
    B1    + B2     |  B1    + B2[X]
    C1    + C2     |  C1    + C2
    ^ We still have two copies of all data in both of these situations, so we're in good shape and can rebuild the array (replace those failed disks and we're good to go).

    A1[X] + A2[X]
    B1    + B2[X]
    C1[X] + C2
    ^ Also still recoverable, as B1 and C2 are still intact -- a 4-disk simultaneous failure.

    We could get really, really unlucky, of course, and C2 could also have a URE on it in addition to the failures of A2 and B2. Keep in mind, though, that the URE would have to have appeared within the last week (thanks to our weekly verifications). It can technically happen, yet it's so close to a 0% chance that you can probably ignore the risk. Also bear in mind that if you did get super, super unlucky, you'd still have your USB external-disk copies of your files -- which drive the odds of complete loss down even further.

    The gist: unless you're counting on winning the lottery (multiple times), the odds are strongly in favor of your RAID protecting you from catastrophic failure. :D

    --It's just a matter of replacing failed disks as soon as you notice them. (Your amount of protection goes down while the array is degraded.)
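    If it helps to see the survival rule from the diagrams above as code, here's a toy model of the layout: the data is recoverable as long as every stripe column still has at least one living disk.

        # Toy model of the 3-way-mirrored, 2-disk-striped array above.
        # Rows are mirror copies; columns are the two stripe halves.
        array = [["A1", "A2"],
                 ["B1", "B2"],
                 ["C1", "C2"]]

        def recoverable(dead):
            return all(any(disk not in dead for disk in column)
                       for column in zip(*array))

        print(recoverable({"A1", "A2"}))              # True: B and C rows intact
        print(recoverable({"A1", "A2", "B2", "C1"}))  # True: B1 + C2 survive
        print(recoverable({"A1", "B1", "C1"}))        # False: stripe half 1 is gone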


    EDIT: **You can also use ZFS (the filesystem) for even more protection here, as ZFS's checksums can be used to recover from errors on a single disk even in a two-disk mirror (where voting logic can't be used, due to the death of the third disk).

    --Also be aware that ZFS likes having lots of free space to work with, and lots of RAM. If you use ZFS, you'll have to reserve more than 10% free. (Covered a bit on the Wikipedia page.)

    Even with 2TB, 10% would be good.


    Per Wikipedia: On the right-most image, notice in the description -- "A hybrid RAID 01 configuration"

    This illustration is not a pure RAID 1+0 form. Some portions of the array are just mirroring, and some combine striping + mirroring (it's a mixture). They're using different combinations for different logical volumes across the same collection of disks.

    The illustration shows that you could, for example, have a section of striping with no mirroring (data that's for performance only), data that's mirrored with no striping (just redundancy), and data that's both striped and mirrored (performance/space + redundancy), all on the same collection of disks. In actual practice there's not much incentive to do this in a NAS, but you might imagine that on two SSDs you could have a RAID1 set for your files and the OS, and a RAID0 set for games, to not waste space.

    EDIT:
    Yes, for 3-way mirroring, every piece of storage is x3. Your math is spot-on: you'd need 30 disks at 2TB each. You could halve that number by using 4TB disks.
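    Quick sanity check on those counts (plain multiplication, in marketing terabytes, ignoring filesystem overhead for now):

        # Disks needed for a 3-way mirror at a given usable capacity.
        import math

        def disks_needed(usable_tb, disk_tb, mirrors=3):
            return math.ceil(usable_tb / disk_tb) * mirrors

        print(disks_needed(20, 2))   # 30 x 2 TB disks
        print(disks_needed(20, 4))   # 15 x 4 TB disks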

    If you really need 20TB, you might want to consider 4TB drives to keep this feasible. In any case, you'll need to buy yourself some add-on disk controllers (cards) for the server. (They can be cheap and don't need any form of RAID support -- just standard SATA controllers.)

    ^ Make sure you get yourself a nice full-sized ATX board that can take a few PCI-E cards. Thanks to software RAID, all of these controllers can be used together, in conjunction with the motherboard's own disk controller, and the disks in the RAID array can be spread across all of them. --Typically an add-on controller card will handle 4-8 drives; there are more expensive ones that take 12+. (Sadly, the price scales up fast as the connector count goes up.)

    *On the OS volume: FreeNAS itself won't need much space at all. You can get a couple of small disks for this (as small as you can find, like 250-320GB), so these don't need to be 2TB.

    When I say online and offline, I basically mean whether the disk is part of the array. Taking a disk offline means removing it from the array.

    ^ YEP, pretty much that's all there is to it.

    Pop the failed disk out, move the new disk into its space, put a new spare in, move your sticky labels over (for disk 3 and the spare), and then RMA the failed disk (if it's still under warranty) or toss it.
    (you might also want to securely wipe it or totally destroy the drive if there's worries of someone getting at your data)

    As the gaming rig and your other machines can write to the NAS (having it mounted), viruses can propagate from any of those machines to the NAS. I don't mean infecting the NAS per se (the infection won't be running on the NAS), but malware can still infect files stored on the NAS, which is something to be cautious about. It's just like how any infection can be copied to a USB drive, or infect a USB drive whenever it's connected to a PC.

    The odds of the NAS itself being compromised are fairly low; most malware will treat it just like a disk drive and isn't even aware of it beyond that. Put another way, the NAS won't read or execute what it's storing; it just performs its purpose of serving up files.

    To be infected, the malware on your machine would have to gain something like SSH access to the NAS (e.g., to force the NAS to run software or to make changes on it). Very, very unlikely, as that means the malware specifically knows how to attack FreeNAS.


    *FreeNAS also has login credentials, and you can (and should) lock down access to your NAS.
     
    Last edited: Dec 10, 2017
  18. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    That sounds amazing with a very low chance of it happening, so I'll definitely go with 3-way mirroring!

    If it detects something as unreadable, instead of using voting logic I assume it will just delete it and rebuild it using the other 2 disks yeah?

    Thanks that's a much better way of explaining it than the image I linked to.

    Seems very safe, having 2 mirrors to build off of.

    That's awesome, so basically as long as there is a 1 and a 2 still intact, no matter the lettering, all the data can be rebuilt efficiently just from those 2 disks? I assume getting 4 hard disk failures simultaneously is a very rare occurrence?

    Could I stripe only 2 drives together? Or is it possible to do it with 5? (So I get the 20TB of storage I desire.) Picture the setup below with 4TB drives (OS1 and OS2 being the OS on 2 separate drives mirrored in RAID1, and S1, S2, S3 being the pre-installed spare drives):

    A1 + A2 + A3 + A4 + A5 (RAID0)
    B1 + B2 + B3 + B4 + B5
    C1 + C2 + C3 + C4 + C5
    OS1 + OS2 (RAID1)
    S1, S2, S3

    So that will give me 20TB of storage, with this 20TB also being mirrored twice (so 3 copies in total)? If so that sounds like a dream come true! xD

    Yes, yesterday I did some more research on FreeNAS and came across the ZFS Filesystem you are referring to, and it sounds like a very good option.

    Yeah I meant to ask you about the RAM. I came across it here:
    http://www.tested.com/tech/500455-building-home-server-using-freenas/
    'The recommended configuration is 1GB of RAM for each terabyte of storage in your ZFS volume; however, that seems less important once you cross 8GB of RAM.'

    And from the FreeNAS site hardware requirements:
    http://www.freenas.org/hardware-requirements/
    Minimums are ranging from 8GB to 32GB.

    A user was querying on reddit as seeing the '1GB of RAM for each terabyte of storage' as a bit excessive, but seems like lots of people told him it's quite a requirement:
    https://forums.freenas.org/index.php?threads/really-1gb-of-ram-per-tb-seems-excessive.10052/

    A couple of questions I came across from that tested link (http://www.tested.com/tech/500455-building-home-server-using-freenas/):
    1. 'FreeNAS stores the operating system on a USB thumbdrive and copies the OS from the thumbdrive to RAM at boot time.'

    Weren't we going to install the OS on 2 mirrored drives? Seems safer than a USB thumbdrive to me...

    2. They also mention ECC SDRAM? Is there anything that makes this better than ECC RAM?

    3. Someone in the comments section wrote: 'The point of using ECC is because the file system freenas uses doesn't check the ram for accuracy of data, so if there is any malfunction or problem with the ram or the data copy over from the ram you can run into problems from files being corrupted and much worse. ECC ram makes sure the data transferred is correct.'

    Any truth to this, or did FreeNAS fix this? Or maybe that's why ECC is such a requirement.

    Yes, it sounds like there are quite a few advantages to ZFS, so I'll make sure not to fill up more than 70%, as the Wikipedia page you linked states. This makes it sound more and more like I need the 20TB, because from 20TB I'll end up with around 12TB left (from a 4TB drive you get 3.6TB of available space, so that's 18TB available, and using 70% of that is 12.6TB). Correct? :/ (Seems a bit on the low side.)

    No wonder I couldn't get it :/

    Got it, SATA controllers and a full sized motherboard.

    For the OS, do these need to be SSDs? (If it gives performance boosts -- especially faster rebuild and verify times -- then I'll definitely get SSDs.)

    Got it, thanks :D

    That really is wonderful! xD

    Haha definitely destroy :p

    This is the same amount of risk as writing to the local storage drive I currently have in my gaming rig though, yeah? So as long as I keep the antivirus up to date on my gaming rig and ECC desktop, and run routine virus/malware scans, it'll be fine? (These scans should be quite fast too, since the machines won't have much on them, with everything being on the server.)

    That's reassuring to know :)

    This I didn't quite understand, and it sounds important :/
     
    Last edited: Dec 10, 2017
  19. A2Razor

    A2Razor Guest

    Messages:
    543
    Likes Received:
    110
    GPU:
    6800XT, XFX Merc319
    Right. For example, if one disk has an unreadable sector (possibly a bad sector forming), and that data can't be read no matter how many times it's tried (the disk gives up and reports the data unreadable), then most RAID implementations are smart enough to mark that sector "bad" and map the data somewhere else, copied back in from the remaining disk(s).

    Linux and BSD software RAIDs (including ZFS) are quite good at handling stuff like this and won't necessarily take the disk offline unless it happens a few times. As soon as a bad sector crops up, though, you'll get warnings in the logs letting you know to replace that disk as soon as possible.

    Both during a scan OR when simply accessing data (if an error is encountered), the on-the-fly repair of your data happens automatically.
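    For reference, the routine verify is just a ZFS scrub. FreeNAS can schedule these from the GUI, but if you ever want to script one yourself, a minimal sketch would look like this (the pool name "tank" is an example):

        # Sketch: start a ZFS scrub and print the pool status afterwards.
        import subprocess

        pool = "tank"   # example pool name
        subprocess.run(["zpool", "scrub", pool], check=True)
        status = subprocess.run(["zpool", "status", pool],
                                capture_output=True, text=True)
        print(status.stdout)   # scan progress plus any repaired errors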

    You can stripe (RAID0) as many disks together as you want, and all of their space is totaled. E.g., you can stripe two disks, five disks, or even 20 disks; there's no artificial limit in software. Physical space for drives and the number of SATA connectors are usually the main limits you'll hit.

    You've got it for the OS, though sticking with the notation that'd be this:
    OS A1
    OS B1
    (RAID1)

    In my experience that guideline is sadly fairly sound: ZFS loves memory and will use almost everything you throw at it. Having too little will slow things down significantly.

    --If you're going with 20+TB of space, I'd say at least two 8GB DIMMs (16GB) would be a good idea. Start there, and if you see any performance problems with ~24TB of disk space, throw in another 2x4GB and you'll be good while staying dual-channel.

    IMO, the guideline people quote is really about sustained sequential writes to your NAS. If you run out of RAM, what you'll notice is that writes start off very fast and then performance tanks quickly, since the OS starts having to wait on disk IO
    (ZFS eating most of the RAM, leaving too little for the write cache).
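    The oft-quoted guideline in code form, for what it's worth (a rule of thumb, not a hard requirement -- as above, starting at 16GB and adding more if writes slow down is a fine compromise):

        # The "1 GB of RAM per TB of pool, 8 GB minimum" rule of thumb.
        def recommended_ram_gb(pool_tb, minimum=8):
            return max(minimum, round(pool_tb))

        print(recommended_ram_gb(24))   # -> 24 GB for a ~24 TB pool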

    You can install to HDDs, and I STRONGLY recommend that you give FreeNAS its own drive. USB thumbdrive = "no swap space".
    -Giving the OS a real installation gives you a safety net if you run out of memory for any reason, and gives the OS a better ability to deal with RAM fragmentation.

    EDIT: See section 2.3 -- https://doc.freenas.org/9.3/freenas_install.html

    There are memory DIMMs that have the ECC circuitry built into the module itself, where the RAM can self-scrub errors independently of the processor driving it. I personally don't see this as any better than ECC performed by the processor, though. It does have one advantage in that you can technically use these in non-ECC systems, but then you receive no notification of errors
    (which is half the usefulness of ECC right there: telling you when DIMMs are failing, and halting the system when errors are uncorrectable).

    Sadly, you can't just write software that performs the role of ECC "in software". If the software attempting to correct errors is itself running from non-ECC memory, then your "scrubbing" software can itself become corrupt. While the comment is true in a sense (a software implementation "can" reduce the rate of errors), there are mathematical proofs showing the problem can't be fully solved without extra hardware.

    I'm not sure it'd be worth it, since a software solution won't eliminate errors completely, yet would carry a major performance hit (and added memory cost) just to try. I.e., if you'd need to add RAM to deal with the errors (losing usable memory space), you might as well just purchase ECC memory at that point.

    From what I've seen, you get pretty good performance with only 20% free on ZFS. I would shoot for one extra 4TB disk per stripe (three excess disks, for 24TB total capacity and roughly 20TB of usable space).
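    Putting numbers on that suggestion (marketing terabytes; formatted capacity and metadata overhead will shave off a bit more):

        # 6 x 4 TB per stripe, 3-way mirrored, keeping ~20% of the pool free.
        disks_per_stripe, disk_tb, free_frac = 6, 4.0, 0.20

        raw_tb = disks_per_stripe * disk_tb        # 24 TB per mirror copy
        usable_tb = raw_tb * (1 - free_frac)       # ~19 TB you can safely fill
        print(raw_tb, usable_tb, disks_per_stripe * 3)   # 24.0 19.2 18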

    Nope, they don't need to be SSDs, and SSDs won't really give you much of a performance gain. In fact, cheaper SSDs (without power-loss-protection capacitors) can't properly be used in a mirrored RAID, due to the risk of partial rollback on power loss (which will break the array / require a repair). The only benefit you'd get from SSDs with FreeNAS is a faster startup, yet your machine is always going to be running anyway.

    Go HDDs; it's just more economical and safer.

    Same style of risk as always, yep.

    This is why good security and usage practices are more important than anything else. As long as you stay away from the illegal stuff, are smart about opening emails and following links, keep everything updated (especially your OS and virus scanner), and limit your porn browsing to Linux (if it must be done) -- you'll be fine. ;)

    Just employ common sense like always. The best protection there is really.

    Similar to your Internet router, your FreeBSD NAS will have a web-based config and also a remote shell (i.e., like a command prompt on the server). Both of these have user accounts that you can configure to protect against other people accessing your NAS setup.

    Or basically: the first thing you should do after you get the server running is change the default account settings. That's all I meant there as far as security is concerned.
     
    Last edited: Dec 10, 2017
  20. 321Boom

    321Boom Member Guru

    Messages:
    122
    Likes Received:
    12
    GPU:
    GTX980 Ti
    Got it, that's really assuring.

    Regarding 'letting you know to replace that disk as soon as possible': if I have 3 pre-installed spare drives though, won't it just rebuild on its own without telling me I need to replace the disk?

    Why is OR in capitals here? Is it important to not do both these things at the same time?

    That's awesome about the striping, thanks :)

    Thanks for correcting me on the notation, yeah since they'll be mirrored not striped.

    So is ZFS the safest/best filesystem I could implement? Is it something FreeNAS creates by default when installing its OS, or do I have to set it up?

    I'll give it all the RAM it needs to work as efficiently as possible. Don't want it working in a slow or degraded state.

    Yeah I don't like the idea of installing the OS on a thumbdrive, just thought I'd bring it to your attention.

    That guide to install FreeNAS is RIDICULOUSLY HELPFUL! Thank you!

    Wouldn't that put the processor under less load though since the RAM is taking care of the error-correcting? (so the processor could continue with other processes instead of doing what the RAM could do?)

    Interesting. Goes to show how much of a requirement ECC is more and more.

    Got that, makes sense to go for the extra disk. Just curious, how much noise would the server make with 23 disks? (6 striped x 3, another 2 for the OS, and 3 spares.) I'll most likely need to put it in my room (will it be too noisy to let me sleep properly?). Also, what would be the ideal ambient/room temperature for it? It gets to 36 degrees in my room in summer when I'm on the gaming rig (that thing is like a heater when I'm gaming and recording!).

    If it's just for startup, then yeah no point since it will be on all the time.

    Haha, screw the economical part, I was sold the moment I saw 'safer' :p So HDDs it is then :) (Just curious, why do you have 2 SSDs in your server? I thought those were for the OS the first time I saw your post saying '24 HDDs and 2 SSDs'.)

    That's good to know :)

    Haha, 'limit your porn browsing to Linux (if it must be done)' -- guess my non-ECC laptop will still have a purpose after all then lol :p

    Yeah common sense and paranoia :p

    Got it, I'll definitely look into these once I get it all set up.

    Thanks once again for all your time and replies, I really do appreciate it and feel like I've learnt so much more thanks to your help :)
     
    Last edited: Dec 11, 2017
