Solved

Size of NFS exports with an EqualLogic PS6100 (w/ the FS7500 NAS heads)

Posted on 2013-02-02
Medium Priority
689 Views
Last Modified: 2013-02-05
Hello,

We have an EqualLogic PS6100 w/ the FS7500 NAS heads, and we have created a 10TB NAS container.


We have about 3TB of data to export via NFS. Our clients are all Debian 6.0.

We use automount on the Debian systems, and the clients can see the FS7500 without issue. The question is how the data should be presented.

Currently, we have an export container called foo and it is 3TB.  

So the clients just cd to /foo and can see all of the data.
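For reference, the client-side automount config looks roughly like this (illustrative sketch; the server name, map file, and mount options are placeholders, not our exact setup):

    # /etc/auto.master -- enable a direct map
    /-    /etc/auto.direct

    # /etc/auto.direct -- one entry for the whole 3TB export
    /foo  -fstype=nfs,rw,hard,intr  fs7500:/foo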

To be clear, the data is organized into directories. Example: /foo/tools, /foo/departments, etc.

My question is whether it makes more sense to create exports like this:

/foo/tools
/foo/departments

So instead of having a single 3TB export, we would have several smaller exports. The automount can still see these exports, so that is not a problem. Example:

The tools export container would be 1TB, departments would be 800MB, etc.
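With that layout, the direct map would simply carry one entry per export instead (same caveats as above; paths and options are illustrative):

    # /etc/auto.direct -- one entry per smaller export
    /foo/tools        -fstype=nfs,rw,hard,intr  fs7500:/foo/tools
    /foo/departments  -fstype=nfs,rw,hard,intr  fs7500:/foo/departments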


I am just wondering if there are any advantages/disadvantages to breaking up the data, or if it is just a matter of preference/ease of management.

Thanks
Question by: cyc-01
2 Comments
 
Accepted Solution
by: robocat (LVL 22), earned 1200 total points
ID: 38849207
There are several things you need to take into consideration.

For example, how do you make backups?

- It may be easier to back up several smaller containers, rather than one big one, within your backup window.
- Different types of data may have different backup schedules or retention times.

On the other hand, one big container will waste less free space than several smaller ones: headroom in a single container can absorb growth anywhere, while the same headroom split across several containers cannot be shared.
 

Author Comment

by: cyc-01
ID: 38858231
Robocat,

For various reasons, we do not back up at the container level but at the filesystem level. Meaning, if my filesystem looks like:

/FOO (3TB mount)
/FOO/data
/FOO/stuff


then I can just set up backup jobs (in NetVault) as follows:

bkjob1 = /FOO/data
bkjob2 = /FOO/stuff

As such, I can achieve different backup windows/times/strategies while still using a 3TB container.

I might be overthinking this; I just don't want to get into a position later in which a 4TB automount is an issue because of some problem with NFS or similar.
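For what it's worth, NFSv3 and NFSv4 both use 64-bit file and filesystem sizes, so a 4TB mount is nowhere near any protocol limit. A quick sanity check on one of the clients (the mount point is ours; the commands are standard Linux tools):

    mount | grep /FOO    # shows the NFS version and mount options in use
    df -h /FOO           # multi-TB sizes report correctly over NFSv3/v4
    nfsstat -m           # per-mount NFS options, including rsize/wsize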

Otherwise, I can't find any issues with this idea.

I am going to give you the points because you have addressed the issue and confirmed that it wouldn't be a liability.