Solved

Size of NFS exports with an EqualLogic PS6100 (w/ the FS7500 NAS heads)

Posted on 2013-02-02
668 Views
Last Modified: 2013-02-05
Hello,

We have an EqualLogic PS6100 w/ the FS7500 NAS heads, and we have created a 10TB NAS container.


We have about 3TB of data to export via NFS. Our clients all run Debian 6.0.

We use automounts on the Debian systems, and they can see the FS7500 without issue. The question is how the data should be presented.
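For reference, the client side is a standard autofs setup; a minimal sketch of the kind of config involved (the fs7500 hostname and mount options are placeholders, not our actual values):

    # /etc/auto.master -- hand mounting over to a direct map
    /-    /etc/auto.direct

    # /etc/auto.direct -- mount the single export from the FS7500
    /foo  -fstype=nfs,rw,hard,intr  fs7500:/foo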

Currently, we have an export container called foo, and it is 3TB.

So the clients just cd to /foo and can see all of the data.

The data is organized into directories, e.g. /foo/tools, /foo/departments, etc.

My question is whether it makes more sense to create exports like this:

/foo/tools
/foo/departments

So instead of having one 3TB export, we would have several smaller exports. The automount can still see these exports, so that is not a problem. Example:

the tools export container would be 1TB, departments would be 800MB, etc.
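With the split layout, the client side just grows one map entry per export; an indirect-map sketch using the same placeholder hostname:

    # /etc/auto.master -- indirect map for everything under /foo
    /foo  /etc/auto.foo

    # /etc/auto.foo -- one entry per smaller export
    tools        -fstype=nfs,rw,hard,intr  fs7500:/foo/tools
    departments  -fstype=nfs,rw,hard,intr  fs7500:/foo/departments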


I am just wondering if there are any advantages/disadvantages to breaking up the data, or whether it is just a matter of preference/ease of management.

Thanks
Question by: cyc-01
2 Comments
 

Accepted Solution

by: robocat (earned 300 total points)
ID: 38849207
There are several things you need to take into consideration.

For example, how do you make backups?

- It may be easier to back up several smaller containers than one big one within your backup window.
- Different types of data may have different backup schedules or retention times.

On the other hand, one big container will waste less free space than several smaller ones.
 

Author Comment

by: cyc-01
ID: 38858231
Robocat,

For various reasons, we do not back up at the container level but at the filesystem level. Meaning, if my filesystem looks like:

/FOO (3TB mount)
/FOO/data
/FOO/stuff


then I can just set up backup jobs (in NetVault) as follows:

bkjob1 = /FOO/data
bkjob2 = /FOO/stuff

As such, I can achieve different backup windows/times/strategies while still using a 3TB container.
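To make that concrete, here is the same split sketched with plain cron + tar rather than our actual NetVault job definitions (schedules and backup paths are made up for illustration):

    # crontab sketch -- independent windows per subtree of one container
    0 1 * * *   tar -czf /backup/data-nightly.tar.gz  /FOO/data
    0 3 * * 6   tar -czf /backup/stuff-weekly.tar.gz  /FOO/stuff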

I might be overthinking this; I just don't want to end up in a position later where a 4TB automount is a problem because of some NFS limitation or similar.

Otherwise, I can't find any issues with this idea.

I am going to give you the points because you have addressed the issue and confirmed that it wouldn't be a liability.
