projects

asked on

CentOS KVM - Best Practice Web Serving

I have a KVM host which started life as a web server.
A VM was created in order to move the web sites off the host. The plan was to keep the MySQL server running on the host itself, allowing the web sites to connect to it.

However, an interesting question has come up.

The host has much more memory than the VMs will have, since the VMs share what the host allots them. In terms of performance, what is the best practice here? Should the MySQL server and databases be moved onto the new VM, or would it make more sense to keep them on the host?

The web sites, some 20 of them, are all low use. In some ways, it seems to make more sense to just keep them on the host, since the host might perform better than one of the VMs it is hosting.
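
For reference, the plan on the MySQL side would look roughly like this, assuming the default libvirt NAT network (the IPs, database name and credentials below are just placeholders):

# On the host, have mysqld listen on the bridge IP instead of only localhost
# (/etc/my.cnf or /etc/my.cnf.d/server.cnf):
[mysqld]
bind-address = 192.168.122.1

# Then allow the web sites' VM to connect from its address:
mysql> GRANT ALL PRIVILEGES ON sitedb.* TO 'webuser'@'192.168.122.10' IDENTIFIED BY 'secret';
mysql> FLUSH PRIVILEGES;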
Dan Craciun

I would use containers for this. One container for each site, and one container for the database.

Why containers and not full VMs: low overhead (5-10%).

This would future-proof your sites. You can move them independently to other hosts when/if the need arises.

You can also go full independence and have each container contain both the site and the database.
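
Roughly what that would look like with OpenVZ's vzctl, for example (the CTIDs, template name, hostnames and limits are only illustrative):

# One container per site, plus one for the database:
# vzctl create 101 --ostemplate centos-6-x86_64 --hostname site1.example.com
# vzctl set 101 --ipadd 10.0.0.101 --ram 512M --swap 512M --save
# vzctl start 101
# ...repeat per site, then e.g. CTID 200 for the MySQL container.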

HTH,
Dan
While having everything on the same host reduces latency, raising the scalability bar means you have to deal with latency anyway (you just cannot have one 1TB server serving 10,000 users - what will you do when their numbers double?)
projects

ASKER

A container for each site? The sites will always be low usage, so I figured that if I could stick them all in one VM, that would be a nice, manageable way to maintain them, including adding and removing some.

I'd like to take advantage of the server's unused resources too. I do need a few VMs at least; one VM, for example, would be a complete mail server, again pretty low use but important nonetheless. Then a couple more for remote developers to log in and do their coding work. Overall, other than these things, the server will be mostly idle.
I'd also like to use its MySQL server as an offload for another application on the same network.

I've never used containers, so I'm reading up on that now. So far, as I understand it, it's a good way of compartmentalizing things without the full VM overhead, but I don't have much time to learn a whole new technology, so I'm not sure how it might apply to me.
It's not new and it's not complicated. As long as you have kernel support.

If you want a web interface, Proxmox is a popular choice. It can manage both KVM machines and OpenVZ containers.
From what I am reading, I already have a container and not a VM?

http://pingd.org/2012/kvm-hardware-virtualization-on-centos-6-2-dedicated-servers.html
The article states: "Once the container reports that it's created you can see its status via:"

Which I run and get...

# virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
 4     prodweb-194                    running

Proxmox seems to be a complete operating system, and this server is already built and configured.
You get a list of the containers on the system with:
vzlist -a

qemu is the virtualizer for KVM, AFAIK.

Docker is the best known example of Linux containers.

Proxmox is not an operating system. The original distribution is a modified Debian, but I think it can be installed over any distribution.
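
A quick way to check what you actually have (vzlist only exists if vzctl is installed):

# virsh list --all     (KVM/QEMU guests managed through libvirt)
# vzlist -a            (OpenVZ containers)

If virsh shows your prodweb-194 guest and vzlist shows nothing (or isn't there at all), it's a full KVM VM, not a container.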
I read that it's open source and free to use, but I can't find anything about installing it on an existing CentOS 7 server.
Unless it's supposed to be installed AS a VM?

Either way, it's still not clear to me what the best practice would be in my case.

I have a powerful server that is only serving up a dozen low-use web sites.

I'd like to take advantage of the server by setting up some VMs which I would allow devs to use for coding. Again, very low power, like 1GB of memory each and barely any CPU usage.

I'd like to take advantage of the host's memory by using its MySQL server, but now I'm learning about containers.

You mentioned putting all the web sites into their own individual containers, so I need to learn more about those to understand where the value of this would be.

In another question I have, I'm asking how I can get GUI VM management without having to install X Windows on a production host.

Proxmox has now come up in this question :)
If you use containers then you should stop thinking in terms of Apache, MySQL, etc. You have an application that serves a purpose (in this case serving web pages). Kind of like an object, if you've been trained in OOP.
If it is self-contained (anything it needs is inside the container) then you can move it, give it additional resources (RAM, CPU cores - while it runs), take snapshots, clone it, etc. And anything you do to that app would only affect itself.

While this is true for full VMs also, containers run natively, meaning they access the RAM, CPU and disk directly, within the limits you impose.

For your current setup (a single server) and usage (mostly unused resources) containers are in the "nice to have" category.

If you have multiple servers and heavily used applications, then the benefits would be great: instead of having 10 KVM machines you would have 15 containers, for example.
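
For example, with OpenVZ you can resize a running container or snapshot it on the fly (the CTID and values are just examples; snapshots assume a ploop-backed container):

# vzctl set 101 --ram 2G --cpus 2 --save
# vzctl snapshot 101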
But why do you need those small machines? Do you have an easy way to cross from the Linux user apache to the Linux user mysql? What problem are you trying to solve in general?
Installing OpenVZ on CentOS is fairly simple: http://www.jamescoyle.net/how-to/1376-install-an-openvz-server-on-centos
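
From memory, the gist is roughly the following, for CentOS 6 (double-check the repo URL and package names against that article):

# wget -P /etc/yum.repos.d/ http://download.openvz.org/openvz.repo
# rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ
# yum install vzkernel vzctl vzquota ploop
# reboot    (boots into the OpenVZ kernel)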

As for GUIs for OpenVZ containers, here is a list: https://openvz.org/Control_panels

Proxmox is not officially supported on CentOS, only on Debian.

There is a fork of Proxmox that is based on CentOS, but I have not used it and don't know how stable it is: http://opennodecloud.com/downloads.html
ASKER CERTIFIED SOLUTION
Dan Craciun
Doesn't sound like containers are something I need to look at yet.
So, my original question remains then.

I have a powerful server that is only serving up a dozen low-use web sites.
The server is set up as a KVM host at this point so that I can create some secluded, low-resource individual VMs for developers to work on.

The host is currently serving web pages for about 15 sites, all low use. I'd like to move the web sites to their own environment so they too are basically secluded from the host.

However, I'm not sure of the best method of doing it, so the question is: if I move those web sites to a VM, would it make more sense to give the sites access to the host's MySQL server, which has more memory, or to add a little extra memory to the VM to allow for a local MySQL server?

Finally, I'd like to use this MySQL server to offload another application by putting some of its tables on this host.
If I put the MySQL server into the web sites' VM, that doesn't sound as effective as leaving it on the host for everything.
You can isolate websites with mod_userdir; what you are doing is overcomplicating your setup.
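
Roughly, something like this in /etc/httpd/conf.d/userdir.conf (Apache 2.4 syntax; the paths are just the defaults):

<IfModule mod_userdir.c>
    # serves http://host/~user/ out of /home/user/public_html
    UserDir public_html
</IfModule>

<Directory "/home/*/public_html">
    Require all granted
</Directory>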
Not sure what your reply means because no, I'm not trying to overcomplicate anything. I'm only asking/responding to suggestions made in this question.

My question remains the same, and I've yet to get an answer which explains it well enough to know what would be best: a VM with the web sites on it, keeping the web server on the host, or something else. I have no issue simply keeping the web sites and MySQL on the host and then creating VMs for the things we need, but I wanted to better understand how others do it.
Either you run everything on the same machine, or you run everything in virtual machines and keep the host machine as clean as possible.
But I know this already. This doesn't make it clear which is the better method. The host has the memory; the VMs have whatever they are allotted. Adding lots of memory to a VM so it can host MySQL, taking it away from the host, doesn't seem to make sense.

It seems to make more sense to leave the MySQL server on the host to start with. But then that leaves the web sites. Is it best to keep them on the host as well, instead of creating a VM to host them and using up more resources just for the web sites?

Anyhow, that's what I'm trying to better understand by asking this question.
SOLUTION
Sure, but 'plenty of resources' becomes degraded either way if you don't manage those resources.
Yes, I understand the benefits of using VMs, but this question isn't about that so much as how to best use the host.
>>how to best use the host
If you want the "best" utilization, don't go virtual.
Any type of virtualization has overhead and then you don't have the "best" resource usage on the host.
LOL, OK, so you've basically responded in a roundabout way :).
Virtualization has its place and of course it takes host resources; we all know that, and it's why we build a server with lots of memory and cores. However, the question remains the same.
While I am very aware of system resources, and that running VMs for single functions wastes those resources, there are many times when VMs are in fact better than installing everything onto one server.

I just wondered what others think about this particular situation. It seems to make a lot more sense to simply leave the web sites and MySQL on the host, where both can take advantage of as much memory/space as the host can offer.

Migration, backups, expansion and failover/reliability are just some of the reasons to use VMs, but so is needing to give devs their own workspaces without giving them access to the main server, for example. In my case, this is why I need to use VMs on this server: because of the various projects I have where I do not want devs to mix, needing them to be separated from each other in every way. Using VMs is the ultimate way of giving them everything they need without risking the mistakes that come from giving access to the host.

I've been using virtualization since the early days of Xen and found it very useful to learn about. Eventually, I was able to get rid of a large number of racks of servers by consolidating them into more powerful hosts and ended up with just a couple of BladeCenter chassis to house everything but storage.

Yes, virtualization definitely has its place.