Solved

Size of NFS exports with an EqualLogic PS6100 (w/ the FS7500 NAS heads)

Posted on 2013-02-02
672 Views
Last Modified: 2013-02-05
Hello,

We have an EqualLogic PS6100 with the FS7500 NAS heads, and we have created a 10TB NAS container.


We have about 3TB of data to export via NFS.   Our clients are all Debian 6.0.

We use automounts (autofs) on the Debian systems, and they can see the FS7500 without issue. The question is how the data should be presented.
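For context, here is roughly what the client-side autofs setup looks like for the single export (the hostname fs7500 and the map file names below are placeholders, not our real ones):

    # /etc/auto.master -- register a direct map
    /-      /etc/auto.direct

    # /etc/auto.direct -- the single 3TB export, mounted on demand at /foo
    /foo    -fstype=nfs,rw,hard    fs7500:/foo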

Currently, we have an export container called foo and it is 3TB.  

So the clients just cd to /foo and can see all of the data.

The data is, of course, organized into directories; for example, /foo/tools, /foo/departments, etc.

My question is whether it makes more sense to create exports like this:

/foo/tools
/foo/departments

So instead of having a single 3TB export, we would have several smaller exports. The automount can still see these exports, so that is not a problem. For example:

the tools export container would be 1TB, departments would be 800MB, etc.
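To illustrate the split layout (again, the server name and export paths are placeholders), the clients would switch to an indirect autofs map, so each subdirectory becomes its own NFS mount on demand:

    # /etc/auto.master -- /foo becomes an autofs-managed directory
    /foo    /etc/auto.foo    --timeout=300

    # /etc/auto.foo -- one key per export; each mounts as /foo/<key>
    tools          -fstype=nfs,rw,hard    fs7500:/foo/tools
    departments    -fstype=nfs,rw,hard    fs7500:/foo/departments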


I am just wondering if there are any advantages/disadvantages to breaking up the data, or if it is just a matter of preference/ease of management.

Thanks
Question by:cyc-01
2 Comments
 
LVL 21

Accepted Solution

by:
robocat earned 300 total points
ID: 38849207
There are several things you need to take into consideration.

For example, how do you make backups?

- It may be easier to back up several smaller containers instead of one big one within your backup window.
- Different types of data may have different backup schedules or retention times.

On the other hand, one big container will waste less free space than several smaller ones.
 

Author Comment

by:cyc-01
ID: 38858231
Robocat,

For various reasons, we do not back up at the container level but at the filesystem level. Meaning, if my filesystem looks like:

/FOO (3TB mount)
/FOO/data
/FOO/stuff


then I can just set up backup jobs (in NetVault) as follows:

bkjob1 = /FOO/data
bkjob2 = /FOO/stuff

As such, I can achieve different backup windows/times/strategies while still using a 3TB container.

I might be overthinking this; I just don't want to end up in a position later where a 4TB automount is an issue because of some problem with NFS or similar.

Otherwise, I can't find any issues with this idea.
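For the record, NFSv3 and NFSv4 use 64-bit file sizes and offsets, so a 4TB export is not a problem in itself (the old 2GB-style limits were an NFSv2 issue). A quick sanity check on a client, using standard Linux tools (/FOO below is just the mount point from the example above), is:

    # show the NFS version and mount options actually negotiated
    nfsstat -m

    # confirm the client reports the full size of the export
    df -h /FOO
    stat -f /FOO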

I am going to give you the points because you have addressed the issue and confirmed that it wouldn't be a liability.
