File Server Storage Planning

I am planning a large file server and would like your advice and suggestions. We currently have 4 TB of data and, based on growth history, I anticipate we will grow to 25 TB within the next 5 years. I am planning our storage infrastructure for this. Our files are 95% .jpg files and 5% Word documents, Excel documents, and other miscellaneous documents. We have two servers, one in Building A and one in Building B (our DR site), connected via gigabit LAN. The plan is to replicate the data from ServerA to ServerB using DFSR, which will provide high availability in the event of a failure on ServerA. We also have a third server in Building B, a DPM 2010 server, which will back up this data.
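(For context, the stated numbers imply a compound annual growth rate of roughly 44%. A small Python sketch, purely illustrative, showing the arithmetic behind the 4 TB to 25 TB projection:)

```python
# Illustrative only: the implied compound annual growth rate (CAGR)
# for growing from 4 TB to 25 TB over 5 years.
def cagr(start_tb, end_tb, years):
    """Compound annual growth rate implied by a start/end capacity."""
    return (end_tb / start_tb) ** (1 / years) - 1

rate = cagr(4, 25, 5)
print(f"Implied annual growth: {rate:.1%}")  # roughly 44% per year

# Year-by-year projection under that constant rate
size = 4.0
for year in range(1, 6):
    size *= 1 + rate
    print(f"Year {year}: {size:.1f} TB")
```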

I am trying to decide between two options and would like your input and advice.

Option 1: Split our data into 4 volumes, then grow those volumes over time by adding storage and spanning each disk as necessary.

Option 2: Put all data on a single volume and grow that one volume over time by adding storage and spanning the disk.

Taking our scenario into account, which option do you recommend, and why?
ITPro44 asked:
Matt V commented:
I know with many SANs I have worked on there is a manufacturer recommended 2TB limit on LUNs, due to the performance degrading significantly after this size.
Also, Windows and other OSes have limitations on partition and volume sizes that are architecturally enforced outside of performance issues.
"If you're planning on booting to a disk or RAID volume that is greater than 2TB, be aware that the only way this is supported is with x64 versions of Vista or 2008, using GPT partitioning scheme, and using an EFI BIOS.  Doing so would put you on the bleeding edge, and I wouldn't recommend it.  
Also note that the 2TB limit is not strictly speaking a limit on the partition size, it's actually a limit on the size of disk that the MBR partitioning scheme can address.  MBR can address a maximum of 2^32 sectors.  Each sector is usually 512 bytes.  Hence, 4 billion * 512 = 2 trillion -> hence the 2TB limit.  
As stated above, GPT should handle a 12TB non-boot partition without a problem. "
-- From: http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Server/Windows_Server_2008/Q_23913594.html
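(The MBR arithmetic quoted above can be checked numerically; a short Python snippet confirming the limit:)

```python
# The MBR addressing limit described above, checked numerically.
# MBR uses a 32-bit logical block address, and sectors are
# traditionally 512 bytes, so the largest addressable disk is:
max_sectors = 2 ** 32          # 32-bit LBA field in the MBR
sector_size = 512              # bytes, the traditional sector size
max_bytes = max_sectors * sector_size

print(max_bytes)               # 2199023255552
print(max_bytes / 2 ** 40)     # 2.0 TiB, hence the "2TB limit"
```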
 
Matt V commented:
Disk volumes generally hit performance issues (if not compatibility issues) around 2TB.  I would suggest you examine how much of your data is accessed on a regular basis and consider tiered storage, where you put "in use" files (determined by last access date) on faster media and migrate the older files to slower, cheaper storage in whichever array will work.
You may wish to consult with HP/Dell/IBM (whomever you are using for storage) on this type of setup.
You would definitely see better performance on the files that are frequently accessed.
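(A minimal sketch of the tiering idea described above, splitting files into "hot" and "cold" sets by last-access time so hot files can live on fast media and cold files can migrate to cheaper storage. The 90-day threshold is an illustrative assumption, not a recommendation, and real access-time tracking depends on the filesystem's settings:)

```python
# Hypothetical sketch: classify files as hot or cold by last-access
# time. Threshold and usage are illustrative assumptions only.
import os
import time

def classify(root, max_age_days=90):
    """Return (hot, cold) lists of paths split by last access time."""
    cutoff = time.time() - max_age_days * 86400
    hot, cold = [], []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue  # file vanished or is unreadable; skip it
            (hot if atime >= cutoff else cold).append(path)
    return hot, cold
```

A migration job could then move everything in `cold` to the slower array on a schedule.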
 
ITPro44 (author) commented:
Thanks Matt, that is a good suggestion.  At this point our hardware has been purchased and we are invested in the current infrastructure described above.  I think this is something we will consider next time this project rolls around.

You mentioned that volumes have performance and possibly compatibility issues around 2TB.  Can you expand on that?
ITPro44 (author) commented:
I do not plan to boot from this partition; it will be for data only.  Have you, or has anyone else, had experience with 20TB+ volumes who can provide me with some guidance or best practices for configuring and managing large volumes?
 
Matt V commented:
We currently store about 20TB, but all in 2TB chunks :)
 
ITPro44 (author) commented:
:)  What is your reasoning for storing it in 2TB chunks?
 
Matt V commented:
A VMware limitation on storage LUNs.
 
ITPro44 (author) commented:
Gotcha.  Do you span your volumes or use something like DFS?
 
Matt V commented:
Currently we do not have any volumes larger than 1.5TB, so we have been lucky.  Ask me next year, though, and I might have a different answer.
 
ITPro44 (author) commented:
Thanks for your help.
Question has a verified solution.