Brian B (Canada)

asked on

NetApp Aggregate - Division of data between volumes

I have an aggregate with 8TB of total space that is used for a vFiler. I received some alerts that more than 90% of the aggregate is used. However, looking at the three volumes that make up this aggregate:

V1 - 1TB, 25% used
V2 - 1TB, 25% used
V3 - 8TB, 90% used

It appears that the data is not being spread across the available space; instead, all the data is going onto V3.

I have inherited this NetApp in this state, so I don't know whether this is normal or not. I would have assumed that the NetApp OS would put more data on V1 and V2 before letting V3 fill up, but that doesn't seem to be happening so far. Is it bad, or against best practice, to let the volumes in an aggregate be different sizes like this?

Thanks!
Paul Solovyovsky (United States)

The vFilers act as virtual NetApp controllers and do not distribute data. The volumes that are attached to these vFilers sit on aggregates. Which aggregate are the corresponding volumes stored on? The volumes may be in different aggregates, which are independent of each other and do not share disks or distribute data between themselves unless this is done at the application level.
Brian B (ASKER)

It's one aggregate. If I look at the information for the aggregate, it shows me those volumes.

Although it might help, unfortunately I can't provide a screenshot for security reasons.
It may be a single aggregate, but you have multiple volumes in that aggregate, and those volumes may be consuming its capacity.

1.  How large is your aggregate?
2.  How much volume storage are you using? Make sure you include data + snapshots in the equation.
3.  Ensure that the volumes are thin provisioned; otherwise a 1TB volume that is only 25% full will still reserve the entire 1TB in the aggregate (see the sketch after this list).
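
Not NetApp code, just a back-of-the-envelope Python sketch of why point 3 matters, using the sizes quoted in the question; the thick/thin behaviour modelled here is the usual meaning of the volume guarantee setting, and the numbers are purely illustrative.

```python
# Illustrative only: how much aggregate space each volume ties up depending on
# whether its space guarantee is "volume" (thick) or "none" (thin).
# Sizes and used percentages are the ones quoted in the question.

volumes = [
    # (name, size in TB, fraction used)
    ("V1", 1, 0.25),
    ("V2", 1, 0.25),
    ("V3", 8, 0.90),
]

AGGR_SIZE_TB = 8

for name, size_tb, used in volumes:
    thick = size_tb          # whole volume size reserved in the aggregate up front
    thin = size_tb * used    # only the data actually written consumes aggregate space
    print(f"{name}: thick reserves {thick} TB, thin consumes {thin:.2f} TB")

total_thick = sum(size for _, size, _ in volumes)
print(f"thick total = {total_thick} TB against an {AGGR_SIZE_TB} TB aggregate")
# 10 TB of fully guaranteed volumes would not even fit in an 8 TB aggregate,
# so these volumes are presumably thin provisioned.
```

Since the three volume sizes add up to 10TB against an 8TB aggregate, they can only coexist if at least some of them are thin provisioned, which is also why the aggregate can fill up while the volumes still look mostly empty.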
Brian B (ASKER)

1. Aggregate is 8TB.
2. Volume usage is as listed above. Nothing else besides these volumes uses the aggregate.
3. The graph shows only 25% used on the volumes.
So let's do the math. The following volumes are on the aggregate, and usage is as listed:

90% of 8TB = 7.2TB
25% of 1TB = 0.25TB
25% of 1TB = 0.25TB
Total = 7.7TB, which equals 96.25% used on the aggregate, which is what the NetApp is reporting. As a best practice you should not go over 90%. If the aggregate is storing the volumes attached to the vFiler, then you're running out of space.
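
The same arithmetic in a couple of lines of Python, for anyone who wants to re-run it with their own numbers (the 8TB aggregate size and per-volume usage are taken from the posts above):

```python
# Re-running the space arithmetic from the post above.
aggr_tb = 8
used_tb = 0.90 * 8 + 0.25 * 1 + 0.25 * 1   # V3 + V1 + V2
print(f"used: {used_tb:.2f} TB of {aggr_tb} TB = {used_tb / aggr_tb:.2%}")
# -> used: 7.70 TB of 8 TB = 96.25%
```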

Brian B (ASKER)

So I suspect that increasing the two smaller volumes won't have any effect, though, and it makes less sense to keep increasing the large one when the other two are not being fully utilized. Is there a way I can fix that?
SOLUTION
Paul Solovyovsky (United States)

This solution is only available to Experts Exchange members.
Brian B (ASKER)

I didn't realize this question was still open, but I think I have found a better explanation as to why this is happening. Looking at the volumes in the aggregate it shows this:

Name, available space, total space
V1, 320GB, 1TB
V2, 320GB, 1TB
V3, 320GB, 8TB

In other words, the NetApp controller is trying to balance the available space across the volumes, not the % used, despite the fact that they are different sizes.

As I said, these are the only volumes in this aggregate.
Member_2_231077

Are you sure V1, V2 and V3 aren't disk groups?
The NetApp isn't trying to balance anything.

There's just one pool of free space that is being shared by all the volumes. This free space is handed out on a first come, first served basis.

Let's say you have 700GB left. Any application or user that writes 700GB to any of the three volumes will use up that free space, and will cause all three volumes to be full at the same time, which will probably crash the applications that are using them and may lead to data loss.

The free space you see in each volume isn't guaranteed, because it's the same shared free space you see in the other volumes. Don't try to add these numbers up; it will not make sense.
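
A small Python sketch of what that shared pool means in practice, assuming the usual thin-provisioning behaviour where a volume reports its available space as the smaller of its own free space and the aggregate's free space; the sizes are the ones quoted earlier in the thread.

```python
# Toy model of thin-provisioned volumes sharing one pool of aggregate free space.
# Each volume's reported "available" is capped by what is left in the aggregate,
# which is why V1, V2 and V3 all report roughly the same available space.

aggr_size_tb = 8.0
volumes = {"V1": {"size": 1.0, "used": 0.25},
           "V2": {"size": 1.0, "used": 0.25},
           "V3": {"size": 8.0, "used": 7.20}}

aggr_free = aggr_size_tb - sum(v["used"] for v in volumes.values())  # ~0.3 TB

for name, v in volumes.items():
    own_free = v["size"] - v["used"]
    reported_available = min(own_free, aggr_free)
    print(f"{name}: own free {own_free:.2f} TB, reported available {reported_available:.2f} TB")
# Every volume reports ~0.30 TB available (the same shared free space),
# so roughly 0.3 TB written to any one of them exhausts it for all three at once.
```

That 0.30TB figure lines up reasonably well with the ~320GB per volume reported above, allowing for rounding and TB/TiB differences.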

That being said, if your aggregate is really 90% full, you're in a dire situation. Because of the way NetApp works, you're likely to suffer performance loss with so little free space. Also, when space finally runs out, all volumes will be affected at the same time: applications using the volumes will probably crash or stop working, and data may be lost.

So I suggest you take action urgently.
Brian B (ASKER)

So what action should I take? Will increasing the size of the smaller volumes actually change anything, since it's the largest of the three that is 90% full? What should be done to ensure the volumes are balanced optimally?
SOLUTION

This solution is only available to Experts Exchange members.
Brian B (ASKER)

"move the largest volume to a larger aggregate, if available"
There are unused disks available, but just to clarify... I thought that the aggregate was made up of volumes? So the size of the aggregate is already the size of the volumes? We then have a vFiler pointing to that aggregate to store data.
> I thought that the aggregate was made up of volumes?

I thought you were talking about RAID groups rather than volumes, which is why I posted 41572694 previously.
ASKER CERTIFIED SOLUTION

This solution is only available to Experts Exchange members.
Brian B (ASKER)

I was looking at the problem the wrong way around. I see now that volumes are not limited by the size of the aggregate in the way I thought. Adding disks solved the immediate problem by making the aggregate bigger, which gave more room for the volume to grow.
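
For anyone who lands here later, a final Python sketch of why adding disks helped, using the same assumed behaviour as the earlier sketch (each thin volume's available space is capped by the aggregate's free space); the 12TB figure is hypothetical, just to illustrate the effect of growing the aggregate.

```python
# Same toy model as before: each thin-provisioned volume reports
# min(its own free space, the aggregate's free space) as available.
# Growing the aggregate (e.g. by adding disks) enlarges the shared free pool,
# so every volume immediately reports more available space.

volumes = {"V1": {"size": 1.0, "used": 0.25},
           "V2": {"size": 1.0, "used": 0.25},
           "V3": {"size": 8.0, "used": 7.20}}
used_total = sum(v["used"] for v in volumes.values())  # 7.7 TB

for aggr_size_tb in (8.0, 12.0):   # before / after adding disks (12 TB is made up)
    aggr_free = aggr_size_tb - used_total
    print(f"aggregate {aggr_size_tb:.0f} TB -> {aggr_free:.1f} TB free")
    for name, v in volumes.items():
        available = min(v["size"] - v["used"], aggr_free)
        print(f"  {name} reports {available:.2f} TB available")
```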