Daryun asked:

Best Way to Deploy a Heavy PHP/MySQL Website in AWS With Auto Scaling and Load Balancer Enabled?

As the title says, we're looking for the best way to deploy a big PHP website in AWS that would auto-scale whenever we have more visitors than normal. Another reason we want to move to AWS is to make our existing website more reliable and to have an uptime as close to 100% as possible.

Details about our existing website:
- PHP files, including template files and images, already total over 2.5GB
- MySQL DB size is over 500GB
- Normally around 2 to 3 million unique visitors per month
- We're currently using a dedicated server hosted at Rackspace

Here are my questions:
1) What is the best way to deploy this type of website to AWS with as little downtime and data loss as possible? What do you recommend we do?
2) We regularly update our website's PHP files. How do you update the PHP files currently running on multiple EC2 instances? I'm pretty sure you don't do it one by one.
3) Is an Amazon RDS Multi-AZ deployment the best way for us to have a very fast and reliable DB? Is it the most reliable way to host a DB in AWS?
4) Is there an optimized AMI made specifically for PHP/Apache websites that I could use? Or should I just stick with the Amazon Linux AMI and install what I need?
Aaron Tomosky:

My big question is: how are your reads and writes done? Elastic Beanstalk is probably the way to go, but if you want to scale to even more visitors, you should consider structuring your logic flow to use multiple read-only web servers.

As to code updating: on AWS, use AWS CodeCommit as your source code repo and push from there. At a high level, if you use multiple web servers, you update the "offline" ones, then rotate them into production (see the sketch below).
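That rotation can also be automated. As one hedged sketch (not necessarily what Aaron means by pushing from the repo): a recent AWS SDK for PHP can ask an Auto Scaling group to replace its instances in rolling batches. The group name web-asg and the numbers here are hypothetical, and it assumes the group's launch template points at an AMI already baked with the new code:

```php
<?php
// Sketch: rolling replacement of all instances in an Auto Scaling group.
// New instances boot from the launch template's AMI, which must already
// contain the new PHP code. Group name and settings are hypothetical.
require 'vendor/autoload.php';

use Aws\AutoScaling\AutoScalingClient;

$client = new AutoScalingClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

$result = $client->startInstanceRefresh([
    'AutoScalingGroupName' => 'web-asg',
    'Preferences' => [
        'MinHealthyPercentage' => 90,   // keep 90% of capacity serving during the roll
        'InstanceWarmup'       => 120,  // seconds before a new instance counts as healthy
    ],
]);

echo 'Refresh started: ' . $result['InstanceRefreshId'] . PHP_EOL;
```

AWS then swaps instances in small batches, waiting for each batch to pass health checks before continuing, so old and new code coexist only briefly during the roll.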
gheist:

One visitor per second is not that burdensome. There are plenty of optimizations to make in MySQL and Apache before you start throwing money at the problem.

Let's look from the user's side:
- Apache 2.2 -> 2.4? prefork -> worker? Apache -> nginx? mod_php -> fcgid?
- Do you cache any content in RAM, and compressed representations thereof?
- Do you use any of the PHP opcode caches?
- Any memory caches backing the DB? (a read-through sketch follows below)
- And do you run mysqltuner or similar scripts at least monthly?
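On the memory-cache point, a minimal read-through cache sketch in PHP, assuming the Memcached extension, a cache node at 127.0.0.1:11211, and a hypothetical articles table; hosts, credentials and TTL are made up:

```php
<?php
// Read-through cache: serve hot rows from Memcached, fall back to MySQL
// on a miss, then populate the cache. Cuts repeated reads off the DB.
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);
$db = new mysqli('db.example.internal', 'app', 'secret', 'site');

function getArticle(Memcached $cache, mysqli $db, int $id): ?array
{
    $key = "article:$id";
    $row = $cache->get($key);
    if ($row !== false) {
        return $row;                  // cache hit: no DB round trip
    }
    $stmt = $db->prepare('SELECT id, title, body FROM articles WHERE id = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $row = $stmt->get_result()->fetch_assoc();
    if ($row !== null) {
        $cache->set($key, $row, 300); // keep for 5 minutes
    }
    return $row;
}

var_dump(getArticle($cache, $db, 42));
```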
Daryun (Asker):

Hi Aaron, what do you mean by how reads and writes are done? PHP files are used to process various requests on our site, and most of the data is in our MySQL DB. Right now both the PHP files and the MySQL DB are hosted on the same server, but on different disks.

> At a high level if you use multiple web servers, you update the "offline" ones, then rotate them into production.
Isn't this very time-consuming? What if you have hundreds of running EC2 instances? It would take you the whole day to rotate all those instances offline for updating and then put them back on again. And if you do it this way, some EC2 instances would be running the updated code while the rest, still queued to be taken offline for updating, are on the old code. I'm pretty sure there is a better way of doing this than what you are suggesting.
Daryun (Asker):

Hi gheist,

What are you trying to say? That we should stay with our current server and optimize it more rather than move to AWS? How can you tell that our current server isn't already properly optimized? Our current server at Rackspace is pretty old; they can't get parts for it any longer to do an upgrade. We can't even increase our RAM (currently only 4GB). We would need to move to a new server to upgrade any part of it. So instead of moving to a new server, we're thinking of going to AWS, where we can request whatever resources we need whenever we want.
Aaron Tomosky:

At your level of traffic I don't think you need to worry about multiple web servers, let alone hundreds. You should be fine with RDS and one web server that scales. Make site1 and site2: one is live and one is staging. Update staging, test, swap bindings (a file-level version of this swap is sketched below). Depending on how you develop, you may have more sites for branches, QA, etc.
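The same live/staging idea also works at the file level on a single web server; a sketch, assuming a hypothetical /var/www/releases layout with Apache's DocumentRoot pointed at /var/www/current:

```php
<?php
// Deploy script sketch: unpack the new release into its own directory,
// then swap the "current" symlink atomically. Apache serves either the
// old or the new code, never a mix of both.
$release = '/var/www/releases/' . date('YmdHis');
mkdir($release, 0755, true);
// ... copy/untar the new code into $release, warm caches, smoke-test ...

symlink($release, '/var/www/current.tmp');           // new link beside the old one
rename('/var/www/current.tmp', '/var/www/current');  // atomic swap on the same filesystem
```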
Daryun (Asker):

Hi Aaron, not to be rude, but do you have experience working on AWS infrastructure? Specifically one that uses auto-scaling with multiple EC2 instances running at once? One of the main reasons we're planning to move to AWS is to have uptime as close to 100% as possible, and to achieve that you need at minimum 2 EC2 instances running in different availability zones. Correct?

Forget about a hundred running EC2 instances; what about just 4 or 6? Updating those one by one is still not the best way to go about this, I'm sure. It's just a waste of time.
Olaf Doschke:

> Forget about a hundred running EC2 instances

Indeed, I wonder why you think so many instances will be started. Are you thinking of one per concurrent user?

> just having 4 or just 6? Still updating those one by one is not the best way to go about this I'm sure.

What makes you so sure? What do you think it takes to update an instance?

How do you deploy new releases currently, i.e. what are your build and deployment tools? How long does a deployment take, that you think upgrading 4 to 6 servers takes way too much time?

Bye, Olaf.
Daryun (Asker):

Hi Olaf,

>> Indeed, I wonder why you think so many instances will be started. Are you thinking of one per concurrent user?

I am wondering how big companies with hundreds if not thousands of instances running at once do their updates; we would like to do the same, if anyone here knows how they do it. Why do it the hard way when there are easy ways to update multiple running instances, like the pros do?


>> How do you deploy new releases currently, i.e. what are your build and deployment tools? How long does a deployment take, that you think upgrading 4 to 6 servers takes way too much time?

We only have one server to update right now, so updating it is not a problem. I'm not saying that updating 4 or 6 running instances would take too much time; what I am saying is that it is a waste of time to do an update manually, one by one, when there are obviously better ways to do it. Let's say you need 6 instances running at once. How do you update them? Turn on one more instance first, then turn one off, then bring the updated instance back on and turn off another, and so on? If that is how you do an update, then there would be running instances that are updated while others are not, which I'm pretty sure is not a good way to run a production website...
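One common way to keep a half-updated instance from serving traffic is to let the load balancer's health check drain it first; a minimal sketch of such an endpoint, with a hypothetical flag file that a deploy script would create before updating the instance:

```php
<?php
// health.php — a health-check endpoint the load balancer polls. Before an
// instance is updated, the deploy script creates the flag file; the ELB
// then marks the instance unhealthy and stops routing traffic to it.
if (file_exists('/var/www/maintenance.flag')) {
    http_response_code(503);  // signal "out of service" to the balancer
    exit('draining');
}
http_response_code(200);
echo 'ok';
```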
ASKER CERTIFIED SOLUTION by Olaf Doschke (full text available to members only)

SOLUTION (full text available to members only)
gheist:

The first step towards atomic parts is moving MySQL to a different machine... and that fcgid...
Olaf Doschke:

> atomic parts

Indeed, I already assumed at least one server image for your application code and one for the database server.

It's typical to connect to a localhost DB at hosted websites; such a setup surely isn't scalable, as there is no such thing as instant mirroring. The typical way to scale up database reads, besides caching quasi-static content, is to use replicating clusters (a read/write split sketch follows below). The static nature of code/executables makes the code a simpler thing to scale, by simply starting an image of an application server.
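As a rough illustration of that read scaling in PHP, assuming an RDS primary plus one read replica (hostnames, credentials and table are hypothetical), where replica reads must tolerate some replication lag:

```php
<?php
// Read/write split: writes always go to the primary; bulk reads go to a
// replica so the primary is free for writes and critical reads.
$primary = new mysqli('db-primary.example.internal', 'app', 'secret', 'site');
$replica = new mysqli('db-replica.example.internal', 'app', 'secret', 'site');

// Write path: primary only.
$page = '/index.php';
$stmt = $primary->prepare('INSERT INTO page_views (page, viewed_at) VALUES (?, NOW())');
$stmt->bind_param('s', $page);
$stmt->execute();

// Read path: the replica is fine for stats that may lag a few seconds.
$result = $replica->query('SELECT page, COUNT(*) AS hits FROM page_views GROUP BY page');
while ($row = $result->fetch_assoc()) {
    printf("%s: %d\n", $row['page'], $row['hits']);
}
```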

Bye, Olaf.