I have an issue that I will explain in more detail in the next question. It has to do with relocating the database files across a recently added 'disk' on the SAN.
I think the primary concern here is to test the setup in a lab environment with easy access to the physical files. A copy of the database and its files should be created on a VM on a local machine. The problem is that there is only a WAN connection via a VPN. This is fine when most operations are server side, but not so much for transferring large files and the bandwidth it would consume. The main database is just under 300GB, and there are one or two more non-trivial databases.
An RDP session to the database server is the only real option. Setting up additional transfer protocols (FTP etc.) is not really feasible. Is there a way to:

1. Divide the files of the larger databases into chunks to be transferred over a period of days, then reassemble them once all of the pieces are received. We did this back in the modem days with very large (comparatively) files. The entire database is preferable, but I am not sure it is strictly necessary.

2. Script out the schema, which is a big job in and of itself, and then a subset of the data. My concern with this approach is making sure I get most of the records in the low-row-count tables and a representative subset of the high-row-count tables.
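For option 1, a minimal sketch of the split-and-reassemble idea follows. It assumes you first produce a compressed backup of the database (e.g. native backup compression) and split that single file rather than the live MDF/NDF files; the chunk size, file names, and checksum step here are my own illustrative choices, not part of any particular tool:

```python
import hashlib
import os

CHUNK_SIZE = 512 * 1024 * 1024  # 512 MB per piece; tune to what one RDP session can survive


def split_file(path, out_dir, chunk_size=CHUNK_SIZE):
    """Split a large file into numbered chunks; return (part_path, sha256) pairs."""
    os.makedirs(out_dir, exist_ok=True)
    checksums = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            part = os.path.join(out_dir, f"{os.path.basename(path)}.part{index:04d}")
            with open(part, "wb") as dst:
                dst.write(data)
            # Record a checksum so each piece can be verified after transfer.
            checksums.append((part, hashlib.sha256(data).hexdigest()))
            index += 1
    return checksums


def reassemble(parts, out_path):
    """Concatenate chunks (sorted by name, i.e. part number) back into one file."""
    with open(out_path, "wb") as dst:
        for part in sorted(parts):
            with open(part, "rb") as src:
                dst.write(src.read())
```

Copying a handful of verified chunks per day over the RDP-mapped drive, then reassembling and restoring the backup on the local VM, avoids one multi-day transfer that a dropped VPN connection would force you to restart.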