Sometimes people include them in the filenames themselves, yep. Those hashes can also serve as unique identifiers for files, such as for database storage and lookup (kind of like an image GUID - globally unique identifier). Visually checking images is for all intents and purposes "good enough", but the human eye won't necessarily detect minuscule damage, just like most people won't see or notice the quality loss from image compression (like JPEG).

There's probably more post-processing going on with the other ports for things like dynamic contrast, edge, or color enhancement. In the worst case, modern TVs have motion interpolation too, which adds an enormous amount of delay. --Shame that the ports don't have any options buried somewhere in the menus, but it's good that you've found the TV is doing some things differently on each.

If you wanted to use it, you'd need a kernel that supports it on each device. That said, for a home-network situation I doubt that multihoming would be much benefit to you. mptcp is more for if you want to combine multiple connections together, get very high transfer speeds over a high-latency link, or achieve things like "connection redundancy" (handoffs from one connection to another). --You may be surprised, yet there's no official Windows mptcp implementation (just like there's no official Windows SCTP implementation either, heh). The closest you can get is using a Linux or BSD VM as a router on the same machine (a router inside the computer, for the computer itself). There are some OpenSource projects doing just that as a short-term solution to get mptcp on Windows... Apple is pretty much at the forefront in pushing mptcp (for global adoption); mptcp will probably be in the Linux and BSD mainline kernel branches before Microsoft gets onboard.
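As a quick sketch of the hash-as-identifier idea (the filenames and directory here are made up for illustration), a file's content can be boiled down to a fixed-length fingerprint and then used as its storage name or database key:

```shell
# Hypothetical example: use a file's SHA-1 hash as its unique identifier.
mkdir -p /tmp/hashdemo && cd /tmp/hashdemo
printf 'pretend image data\n' > photo.jpg

# Compute the content hash (first field of sha1sum's output).
hash=$(sha1sum photo.jpg | awk '{print $1}')

# Store the file under its hash -- byte-identical content always maps to the
# same name, so duplicates are detected for free.
cp photo.jpg "$hash.jpg"
echo "$hash"
```

Two byte-identical files produce the same hash, so a lookup table keyed on the hash deduplicates automatically, while a single flipped bit produces a completely different hash -- which is exactly why the same tool doubles as corruption detection.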
(Really ironic considering Microsoft is generally thought of as a technology leader.) You'll find mptcp in quite a few commercial "Internet bonders", yet the largest-scale deployment is Apple's phones and servers. I'd personally have thought that Microsoft would have more interest in leading Internet protocol innovation... though they're showing very little interest in doing so. Yep, you got it: as things are, your clients "so far" are Windows machines.

-The "d" at the end of OpenSSH there stands for "daemon". The OpenSSH project is actually both a client and a server (daemon), and you'll find that some Linux distros will throw in that "d" at the end of the service name to avoid confusion [make it clearer that it's talking about the server component and not the client]. You can also install the SSH client and server independently, but the core openssh package tends to come as a collection of both together (more or less the complete solution / all the tools you need to start a server and communicate with it).

Windows, Linux, Android, iOS, you name it and there exists an SFTP client for it. Very widespread use, pretty much becoming a de facto standard for secure file access; even some web browsers are getting support for it built in.

NetDrive being commercial limits what OpenSource code they can "legally" use. OpenSource projects like WinFsp (which is basically the FUSE concept ported to Windows) are GPL. --The gist: unless a project is "L"GPL (Lesser GPL) -- like "Dokan" -- any work that links the GPL code (dynamically, statically, doesn't matter -- use of that code in any way) automatically also becomes "GPL" (that's copyleft, not public domain: the source has to be released), which means it's difficult to protect your assets. Some licenses are more restrictive than others, but in general, unless a library is under something like an MIT or BSD license, most companies will be pretty wary of touching it.
^ The desire to keep source "closed" isn't necessarily a bad thing, though it can mean reinventing the wheel when there are projects (like WinFsp) out there that already work. NetDrive is pretty stable, but it's a completely different project from the ground up.

Anyway, getting back out of licensing: those programs all work similarly (same end goal). Whichever you pick and configure will mount an SFTP share as a 'volume' on the machine. That means you configure them once and they "do their thing". They provide what looks like a disk drive to anything you use... be that Windows Media Player, VLC, drawing programs, a text editor, or Windows Explorer. Yes, ahem, this is for ease of storing files that you're going to work on when you're at home. Convenience, quality of life: yeah, that, maybe. (COUGH)

"These third party clients are WinSshFs and NetDrive?" -- there's actually a ton of them, as you'll see when you search around, but yeah, they're all the same core concept: mounting some remote share with various protocols that Windows doesn't natively support. --It's possible to use Windows Explorer like normal with those two clients.

FileZilla and WinSCP are standalone programs that can transfer and synchronize files to servers. They don't mount the shares as a volume / drive, and they won't give you access to those network shares from other software. --They're more useful for infrequently accessed shares. For instance, say you have a website that you want to upload files to "occasionally"; it may not make sense to leave that mounted, and you might prefer a solution like those tools.

I'd say the best bet is an automated scan with everything you can. If you have a machine with Avira, have that scan your files. If another machine has Malwarebytes, try scanning everything with that too; same goes for anything else, including even Microsoft Security Essentials, or ClamAV on the NAS itself.
The more you use, the greater the chance of detection if some infection were to somehow sneak on there. --It'd be a good idea to tweak your AV settings in all cases, because we don't necessarily know what the antivirus programs will recognize "as a network share". For instance, it's always possible that they'll add detection for the third-party network mounting tools as well as Windows' native built-in mounting. The server itself would be fine (not infected), yet malware could still do damage... like replacing your images with pictures of squid, infecting stored executables on the NAS, encrypting files (ransomware), etc. The more you can restrict access (e.g., read-only), the safer you'll be in this regard, since there'll be fewer files that an infected machine could touch or overwrite.

See above, though to elaborate a bit: --Storing malware on the server alone does nothing beyond that [just copying files to the NAS doesn't necessarily infect the NAS or other machines]... though any computer that can write to the NAS can also destroy data on the NAS (delete, edit, etc). This is why you'll want to give some thought to which machines should have write access. The game machine only needs limited write access for transferring off videos. A media PC would only need read access to play movies, videos, and so on. As long as you can restrict access like that, and only do questionable things on machines that have restricted access, your data will be very safe.

Yep! You can do this on FreeNAS, or you could run it from Windows on your mounted volume (it may be faster run on the server itself over SSH -- no network bottleneck that way). FreeBSD comes with everything you'd need to generate and check hashes for files. If you look around you'll find that a lot of people have written "bash scripts" and "one-liners" using these tools to loop over everything in a directory and write out or compare hashes to / from files.
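As one sketch of that kind of read-only restriction at the server end (the group name and chroot path here are made up; OpenSSH's internal-sftp does accept sftp-server flags like -R for read-only), sshd itself can enforce it per account in sshd_config:

```
# Hypothetical sshd_config snippet: members of the (made-up) "mediapcs" group
# get a chrooted, read-only SFTP session and nothing else -- no shell, no
# forwarding, and no ability to modify or delete anything on the share.
Match Group mediapcs
    ChrootDirectory /mnt/tank/media
    ForceCommand internal-sftp -R
    AllowTcpForwarding no
    X11Forwarding no
```

Done this way, even a fully infected media PC can read movies off the NAS but has no path to encrypt or overwrite them.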
Example shamelessly ripped from StackOverflow (boards) of a one-liner generating hashes for a directory:

Code:
find ./path/to/directory/ -type f -print0 | xargs -0 sha1sum

EDIT: (description of the above) Realize that this may look intimidating, but there's actually not much to it.

Code:
x:~$ mkdir test
x:~$ cd test
x:~/test$ echo "test" > 1.txt
x:~/test$ cp 1.txt 2.txt
x:~/test$ find ./ -type f -print0
./2.txt./1.txt
x:~/test$ find ./ -type f -print0 | xargs -0
./2.txt ./1.txt
x:~/test$ find ./ -type f -print0 | xargs -0 sha1sum
4e1243bd22c66e76c2ba9eddc1f91394e57f9f83  ./2.txt
4e1243bd22c66e76c2ba9eddc1f91394e57f9f83  ./1.txt

Code:
**ripped from the documentation**
-type f
       regular file
-print0
       True; print the full file name on the standard output, followed by a
       null character (instead of the newline character that -print uses).
       This allows file names that contain newlines or other types of white
       space to be correctly interpreted by programs that process the find
       output. This option corresponds to the -0 option of xargs.

So what is all this? Well, the English version:

mkdir test -- make a directory
cd test -- move to that directory
echo "test" -- writes "test" to standard output (to the console)
> 1.txt -- redirects the output from echo to a file instead (creates a file containing the line "test")
cp 1.txt 2.txt -- copy 1.txt to 2.txt (now we have two files in there, hooray)
find ./ -type f -print0 -- sometimes when you want to see what you're doing, it's easiest to execute things in 'parts'. find is a command-line tool to search for files and directories. In this case we want to find files in "./", since that's the folder we're in right now (that newly created "test" folder). We get a single line of output, "./2.txt./1.txt" -- there's actually a character between those two that the shell can't visibly represent here, a "\0" (usually called a null terminator). That's being used instead of a newline delimiter because we asked for it with "-print0".
"|" -- is used to feed the output of one program to another.
" | xargs -0" -- xargs can take that null-terminator-delimited output from "find" and turn it into spaced and quoted (if needed) command-line arguments (basically make them ready to pass to another command-line tool). -0 denotes that the null terminator will be used instead of newline, like in "find". This output looks like: ./2.txt ./1.txt (two nicely formatted arguments to pass along)
find ./ -type f -print0 | xargs -0 sha1sum -- so now all we need is to pass these into a tool that generates hashes and takes a list of files (like sha1sum, but it could be sha256sum, md5sum, or any other tool that takes similar arguments).

Everything seems like a flood at first, but there's also only so much out there. When you first used Windows or DOS, it probably took some time getting used to as well. It's really much the same as picking up any new game, where the controls feel overwhelming and there are so many new systems and mechanics to learn. After a bit though, that all calms down when the realization hits that you're getting closer and closer to the ending. Playing with your file server, BSD, Linux, or any OS is much the same as playing a new videogame. Once you get the mechanics down, everything gets easier and easier the more you experience.

No problem as always, and you too! Also, sorry if it seems like I'm basically preaching switching away from Windows at times with all this OpenSource stuff. I'm not even that strong an OpenSource advocate; it's more just a matter of stepping out of that comfort zone a little bit and seeing all the awesome stuff that's offered out there (especially in the server space). Windows has its place, and so do BSD, Linux, Android, Mac, and everything built on them [and especially OpenSource, where commercial works just wouldn't happen].
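To close the loop on the one-liner above (same tiny test files as the example; the -c "check" flag is standard in GNU coreutils sha1sum, and most BSD/Linux hash tools have an equivalent), the generated list can later be fed back in to verify that nothing has silently corrupted:

```shell
# Recreate the tiny example directory from above.
mkdir -p /tmp/hashcheck && cd /tmp/hashcheck
echo "test" > 1.txt
cp 1.txt 2.txt

# Write the hash list somewhere OUTSIDE the directory being scanned,
# so a re-run doesn't end up hashing the list itself.
find ./ -type f -print0 | xargs -0 sha1sum > /tmp/hashes.sha1

# Later (after a copy, a scrub, whatever), verify everything in one shot:
sha1sum -c /tmp/hashes.sha1
```

Each intact file is reported as "OK"; a corrupted or missing file is called out and sha1sum exits non-zero, which makes this easy to drop into a cron job or script.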
They all have their strengths and things they're awesome at if you just push past the initial learning curve and see what they can do for you. It's like having chocolate cake, vanilla cake, and so on: each flavor is great on its own, but there's no reason to limit yourself to just one.