Shoutcast Problem with OpenVZ - High Memory Usage


We are running a small VPS hosting service using HyperVM & OpenVZ with the Centos-5-i386-hostinabox52 template.

We are experiencing an issue where shoutcast will use an extraordinary amount of memory, increasing along with the MaxUser setting. For instance, a single shoutcast server with a limit of 500 MaxUsers (and no active stream) may consume between 80 and 200MB of memory. Obviously this is an insane amount for a single shoutcast server.

An associate and I have been researching this over the last week and have stumbled upon a few leads; however, we have been unable to solve the problem.

The following thread over at the OpenVZ forums seems highly relevant to the problem:

Based on it, we have tried setting the CpuCount to 1, to no avail.


>  Obviously this is an insane amount for a single shoutcast server.

I'm not so sure that 80-200MB is a huge amount for 500 users (even inactive). Why do you think it's insane?

What shoutcast software are you using? Is there a hardware scalability guide for this software?
Extreme43Author Commented:
Thanks for the response!

We are using shoutcast server 1-9-8 for Linux with glibc6. From our experience in dedicated environments shoutcast uses minimal memory, and 200MB (in our personal experience) is an enormous amount.

You see, MaxUsers is simply a limit and should "normally" have no effect on the memory used.

I have 15 servers on our dedicated Fedora testing/development machine running with an 800 MaxUser limit per server. Memory used is approximately 5MB.

Unfortunately the shoutcast DNAS is not documented and has barely been maintained over the last 4-5 years. It is also the most popular software in the industry so moving elsewhere is not feasible.

This problem is common with OpenVZ and, as far as we are aware, also with Xen.
However, we have recently discovered that the 2.6.18-53.1.6.el5xen kernel out of the kernel-xen-2.6.18-53.1.6.el5 rpm of CentOS 5 works fine and has no issues. Unfortunately, due to the amount of memory assigned to the host and the way Xen works, using this is out of the question (expenses, downtime, etc.).
I can hardly help you, because all symptoms point to the way OpenVZ allocates memory for your server, and there is nothing to be done about that.

But I'll try your configuration locally. It will take some time.
Please provide your shoutcast server config and the way you are measuring allocated memory.

I found the hardware requirements for shoutcast; it requires only 14 kB of memory for every listener.

I've just run this server, and on a standard kernel it takes 52MB of virtual memory and 1.7MB of resident memory for MaxUsers=500. It uses a non-standard 'clone' syscall to create its thread. I'll try to find out the reason for the problem tomorrow, once I install OpenVZ and have more free time.
Extreme43Author Commented:
I have included the server configuration we have attempted, noting we have tried with and without CpuCount.

I have heard of problems regarding shoutcast and NPTL, but those were from a few years back. It would be interesting to run shoutcast without NPTL to see how it responds, but I would be unsure how to do so correctly.


Extreme43Author Commented:
We use "cat /proc/user_beancounters" to check the memory usage.

No Shoutcast servers:
resource                     held              maxheld              barrier                limit
privvmpages                  6278               262135               256000               262144
oomguarpages                 3314                12452               131072           2147483647

1 Shoutcast server with an 800 MaxUser limit:
resource                     held              maxheld              barrier                limit
privvmpages                 26130               262135               256000               262144
oomguarpages                 3637                12452               131072           2147483647

As you can see, when the server is started over 70MB is "reserved" while only 14MB of memory is actually used in total. When we kill sc_serv, the reserved amounts are released.

We attempted to start ~20 shoutcast servers; we saw the limits reached and the failcnt start to rise, and in turn shoutcast logged the following:
<02/03/08@17:32:15> [main] failed to alloc (29772288) bytes for clients

That is approximately 28MB that shoutcast is trying to allocate (possibly per thread?).
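The page accounting above can be sketched in a few lines. This is only an illustration using the figures quoted in this thread (assuming 4 KB pages on i386), not output from any OpenVZ tool:

```python
# Rough capacity estimate from the privvmpages figures quoted above.
# Assumes 4 KB pages (i386); the counts come from this thread's output.
PAGE_KB = 4

def pages_to_mb(pages):
    return pages * PAGE_KB / 1024

baseline = 6278    # privvmpages held with no shoutcast servers
with_one = 26130   # privvmpages held with one 800-MaxUser server
barrier = 256000   # privvmpages barrier from user_beancounters

per_server = with_one - baseline
fit = (barrier - baseline) // per_server

print(f"one server charges {per_server} pages ≈ {pages_to_mb(per_server):.0f} MB")
print(f"servers that fit under the barrier: {fit}")
```

That ballpark (roughly a dozen servers) is consistent with failcnt rising before all ~20 servers were up.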
> We attempted to start ~20 shoutcast servers and we see the limits reached and the failcnt start to rise, in turn we also see shoutcast log the following:
<02/03/08@17:32:15> [main] failed to alloc (29772288) bytes for clients

Have you ever tried to run 20 simultaneous shoutcast servers on bare Linux (not inside OpenVZ) on the same machine with the same config (except port/IP)?

> That is approximately 28MB that shoutcast is trying to assign (possibly per thread?).

I installed OpenVZ with the fedora-core-5-minimal template on the 2.6.18-53.1.4.el5.028stab053.4 kernel.

After some experiments with sc_serv I found that it's a normal amount of memory, and that the problem is not related to NPTL or the number of threads.
Exactly the same amount of memory is consumed under non-virtual Linux. The problem is in your memory measurement method.

In your last post, sc_serv had used:
26130 - 6278 = 19852 pages, or about 78 MB

In my OpenVZ without sc_serv:
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize         671686     998268   11055923   11377049          0
            privvmpages        2289      17836      65536      69632          0
            shmpages            640        656      21504      21504          0
            physpages          1136       1514          0 2147483647          0
In my OpenVZ with sc_serv (800 max users):
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize         791777     998268   11055923   11377049          0
            privvmpages       17831      17836      65536      69632          0
            physpages          1478       1514          0 2147483647          0

So the usage is:
17831 - 2289 = 15542 pages, or about 61 MB
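As a sketch, the 'held' figure can be pulled out of user_beancounters-style text and converted the same way. The parsing helper below is illustrative (not an OpenVZ utility), and the sample is abridged from the counters shown above:

```python
# Extract the 'held' column for a resource from /proc/user_beancounters-style
# text and convert 4 KB pages to MB. Sample abridged from the output above.
sample = """\
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      101:  kmemsize         791777     998268   11055923   11377049          0
            privvmpages       17831      17836      65536      69632          0
            physpages          1478       1514          0 2147483647          0
"""

def held_pages(text, resource="privvmpages"):
    # Each beancounter line is whitespace-separated; 'held' follows the name.
    for line in text.splitlines():
        fields = line.split()
        if resource in fields:
            return int(fields[fields.index(resource) + 1])
    raise KeyError(resource)

delta = held_pages(sample) - 2289   # 2289 = privvmpages held without sc_serv
print(f"{delta} pages ≈ {delta * 4 / 1024:.0f} MB")
```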

On bare Linux, sc_serv with 800 users takes (I've used 'top -p `pgrep sc_serv`'):
 2270 root      15   0 63596 1768  540 S    0  0.2   0:00.01 sc_serv

Virtual memory (the same as privvmpages in OpenVZ) used in that case is 63596 KB, or about 62 MB.

I also traced how sc_serv uses NPTL. It creates only one thread, which corresponds to 1 kernel thread; that's why kmemsize also increased (by about 120 KB) in OpenVZ when the shoutcast server is running.

And yes, it's quite a memory-consuming application. Although it uses only 1.8MB of physical memory (the RES field in 'top' output, or the 'physpages' parameter in OpenVZ), it allocates much more even on a bare OS.

20 OpenVZ containers on a single machine with sc_serv in each is worse than 20 sc_serv processes under a single OS, because OpenVZ has memory overhead plus an additional copy of libc in each virtual machine. So a plain single-OS machine can host somewhat more shoutcast servers, but not many more.

I see no way to decrease the amount of memory consumed by sc_serv, but you may add more swap space or physical memory to increase the number of available vmpages.
Extreme43Author Commented:
I see what you mean. I have 30 shoutcast servers (800 MaxUsers) running on a dedicated bare Fedora install using minimal physical memory but still allocating the large amount of virtual memory. This virtual memory, as I understand it, is stored on the hard drive in a normal environment and is allocated as physical memory when needed.

So while the shoutcast server is inactive it uses ~1MB of physical memory and allocates ~70MB ready for when users connect, correct? What I don't understand is why this doesn't work the same way on the VPS, and why the VPS cannot cope with it.

Your time is much appreciated!
> This virtual memory as I understand is stored on the hard drive in a normal environment and when needed it is allocated as physical memory.

Not exactly. Virtual memory is the total number of pages taken by the process. It includes:
- the process image space
- shared (among other instances) library space, occupied by libc, libpthread, and other dependent libraries
- space allocated by the process from the OS heap

How do you measure the amount of space taken by the 'sc_serv' process plus all dependent shared libraries? Just use MaxUsers=1, run the process, and look at the VIRT size in 'top': it takes about 32MB, most of which is used by shared libraries.

What happens when we run more processes in the same OS within the same container (either in a single OpenVZ container or in a plain OS)? All such instances _share_ the same memory pages of libc, libpthread, etc., and use extra memory only for the process itself.

What happens when we run multiple processes in multiple containers (be it 20 Xen or 20 OpenVZ machines)? They have separate copies of libc, libpthread, and so on, as well as the memory for each process itself.

> So while the shoutcast server is inactive it uses ~1MB of physical memory and allocates ~70MB ready for when users connect, correct?

No. The memory is already allocated once the process has started (the MaxUsers parameter exists exactly for this reason, to calculate the amount). This memory includes physical memory + swap storage. How to divide this memory, and whether to take it from physical RAM or swap, is up to the Linux kernel. If no reads/writes are performed it goes to swap. Swap and physical memory are interchangeable.

The main idea is that your total virtual memory equals swap space + physical memory, and there is no difference between swap and physical memory except in access speed. So you may safely increase your swap and in that way increase the possible number of virtual machines inside a single box. The other way is to install 32GB of RAM everywhere _and_ to increase swap, so the probability of paging will be minimal.
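That "virtual memory = RAM + swap" rule can be sketched as a simple capacity estimate. All the figures here are hypothetical round numbers, not measurements from the thread's host; only the ~62MB per-instance virtual size comes from the discussion above:

```python
# Capacity estimate under the rule "usable virtual memory = RAM + swap".
# ram_mb, swap_mb and reserved_mb are hypothetical round figures.
def max_instances(ram_mb, swap_mb, per_instance_mb, reserved_mb=256):
    """How many instances fit once an OS reservation is set aside."""
    return max(0, (ram_mb + swap_mb - reserved_mb) // per_instance_mb)

# Each sc_serv at 800 MaxUsers was measured above at roughly 62 MB virtual.
print(max_instances(2048, 1024, 62))   # 2 GB RAM + 1 GB swap
print(max_instances(2048, 4096, 62))   # same RAM, more swap -> more headroom
```

Growing swap raises the ceiling without new hardware, at the cost of paging speed if the instances ever touch those pages.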


Extreme43Author Commented:
Thanks for the help. It looks like it all comes down to how shoutcast allocates memory and how OpenVZ handles memory. We will be transferring the server over to Xen, which handles shoutcast appropriately.

Your efforts are greatly appreciated.
Thank you for points.

You already knew the answer before asking...

Xen has even more system overhead than OpenVZ, so I'm not sure that on the same hardware + RAM + swap it would be able to carry more shoutcast servers. If you succeed, it would be nice to document it in a post here.
Extreme43Author Commented:
Well, I had an idea of the problem; your posts clarified and detailed it. I was hoping there was a fix or workaround available, but it doesn't seem there will be one for a very long time.

I understand that with OpenVZ and shoutcast the physical memory usage is low, but the way the allocation works really throws everything off, in the sense that other applications cannot allocate memory, and those applications and services start failing.

We understand Xen is a bit more resource hungry, but its kernel (in my opinion) utilises the memory and swap spaces sensibly, and we will be able to run shoutcast as if in a normal environment. We will be testing Xen with shoutcast in the next few days, and I will post the results here.
Extreme43Author Commented:
To clarify: we installed Xen with CentOS and launched 100 shoutcast servers running 999 MaxUsers under a VPS configured with 256MB RAM and 512MB swap. The memory consumption was about 100MB, as expected, and the swap was not touched until total VPS memory consumption reached around 200MB with additional services, at which point approximately 70MB was allocated to swap. Launching further servers would share memory with swap, until we launched an additional 100 SC servers, at which point the server became extremely slow but still manageable, with no failing processes.

The resource usage between OpenVZ and Xen was comparable and acceptable.
It looks like we will be moving our server to the Xen platform.
Hi, Extreme43. Good to know.
It seems Xen manages memory much better.
Thank you for your feedback.