Scaling MySQL for 50,000,000,000 requests

We have to serve 50,000,000,000 requests per day. Our app can handle that, but we're not sure our database skills are there to pick this up. We have a database server with 256 GB of RAM and a 512 GB SSD RAID, running the Percona edition. Some of the data needs to stay in the database for 30 days: the transaction table, where all requests are stored. What approach do we need to scale this horizontally? Each endpoint request runs about 4 queries.
Asked by Mario Bernheim, CTO

Granted, it is for a site that runs Windows, so there is nothing MySQL-specific in it, but for general ideas the Stack Exchange architecture is worth reading up on.

In general with MySQL, the more IO bandwidth you can provide to the database, the better. So rather than a single RAID5/6 array for both database and logs, put databases (or, with file-per-table, individual tables) on RAID1/RAID10 sets and logs on other RAID1/10 sets, ideally on PCIe SSD rather than SATA/SAS, and make the buffer pool as big as you can manage.
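A minimal my.cnf sketch of that layout might look like the following; the paths and sizes are placeholders for illustration, not tuned recommendations for this workload:

```ini
[mysqld]
innodb_buffer_pool_size   = 192G         # as large as the box allows (~75% of 256 GB RAM)
innodb_file_per_table     = ON           # one tablespace file per table
datadir                   = /data/mysql  # data files on their own RAID1/10 set
innodb_log_group_home_dir = /log/mysql   # redo logs on a separate RAID1/10 set
innodb_flush_method       = O_DIRECT     # avoid double-buffering through the OS page cache
```

The point of the split is that sequential redo-log writes and random data-page IO stop competing for the same spindles/flash channels.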
I would recommend doing a small test. Simply subscribe to a cloud service with a MySQL database, set up your database, and run a stress test. This will show what hardware sizing you'll need for the given number of requests. The cloud is great here because you can easily change the number of CPUs, the RAM, the storage, etc., and it will be cheap because you'll only need it for a short time.

After the test you'll most probably find your current hardware insufficient. Did you calculate what the size of the data will be? Even if each request generated just 1 byte of data that you need to store for 30 days, that's 50 GB per day and 1.5 TB retained, so you would need much bigger drives. You should also plan space for backups, or even replication, etc. What is the acceptable downtime for your application? Did you calculate the necessary bandwidth for all the planned requests?
50,000,000,000 requests over what time frame?
Do similar queries return the same response data from a cache?
What type of requests: reads or writes?
How much data is in the DB, and what is the projected rate of growth?
Gerald Connolly commented:
50 giga-requests per day is roughly 580k transactions per second.
So at that rate, even if you stored only 10 bytes per transaction, that's your 512 GB used up in a day (50 billion × 10 bytes = 500 GB).
So your storage system needs to be capable of at least ~2.3 million IOPS, if your figure of 4 queries per transaction is correct, and needs about 15 TB of capacity for every 10 bytes stored per transaction over the 30-day retention.
Of your 50G trans/day, what is the distribution of SELECT, INSERT, UPDATE, DELETE?
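The back-of-envelope figures above can be checked with a quick sketch (constants taken from the thread; 10 bytes per row is a deliberately optimistic lower bound, and treating each query as one IO is a simplification):

```python
# Back-of-envelope sizing for 50 billion requests/day.
REQUESTS_PER_DAY = 50_000_000_000
QUERIES_PER_REQUEST = 4          # from the question
BYTES_PER_ROW = 10               # optimistic lower bound from the comment above
RETENTION_DAYS = 30

tps = REQUESTS_PER_DAY / 86_400              # transactions per second
iops = tps * QUERIES_PER_REQUEST             # if every query costs one IO
daily_bytes = REQUESTS_PER_DAY * BYTES_PER_ROW
retained_bytes = daily_bytes * RETENTION_DAYS

print(f"{tps:,.0f} tps")                        # ~578,704 tps
print(f"{iops:,.0f} IOPS")                      # ~2.3 million IOPS
print(f"{daily_bytes / 1e9:,.0f} GB per day")   # 500 GB/day
print(f"{retained_bytes / 1e12:,.0f} TB for 30 days")  # 15 TB
```

So a single 512 GB volume is exhausted in about a day even at 10 bytes per row, which is the point Connolly is making.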
Mario Bernheim (CTO, author) commented:
Thanks, everyone, for taking the time to review my question. The idea is to be able to shoot for 50,000,000,000 transactions and know that we can handle that with whatever architecture or setup we decide to go with. Since this is a SaaS platform, we want to make sure that we are comfortable.

Would the right approach be scaling up or scaling out?
No, you cannot be comfortable with a database server with 256 GB of RAM and a 512 GB SSD RAID.
Scaling up or scaling out is definitely a solution. Use whatever your software supports and whatever your money allows.

First of all, you should know the amount of data processed, stored, and transferred by the app; then you can make further decisions.

The hardware necessary for this app will cost hundreds of thousands of USD. I would also expect a licensed OS and MySQL. Every hardware/software vendor will be happy to calculate the necessary figures for you.
Contact Peter Zaitsev and ask him.  He's my go-to expert for MySQL performance questions.
David Favor (Linux/LXD/WordPress/Hosting savant) commented:

1) Without knowing the exact type of load, there's no way to guess.

2) With this level of transactions, even for straight SELECTs (reads), I'd strongly suggest you run MariaDB (a MySQL-compatible fork) on a LAMP stack.

That's because tuning your workload on LAMP tends to be straightforward, with tools like mysqltuner being commonly available.

Trying to run this type of load on Windows will likely create a substantial time drain spent tuning your system, with very little effect.

3) The single best tuning tip for high-throughput database systems: use LAMP, relocate /tmp onto tmpfs, and change the temporary-table storage engine to match your workload (InnoDB for heavy reads, MyRocks for heavy writes).
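A sketch of the /tmp relocation, assuming MySQL 5.7 variable names (other forks differ, and stock MySQL only offers InnoDB or MyISAM for on-disk internal temp tables; MyRocks would require a build that ships it):

```ini
[mysqld]
tmpdir = /mnt/mysqltmp        # a tmpfs mount, e.g. in /etc/fstab:
                              #   tmpfs /mnt/mysqltmp tmpfs size=16G 0 0
internal_tmp_disk_storage_engine = InnoDB   # engine for on-disk internal temp tables
```

With tmpdir on tmpfs, temp-table spills hit RAM instead of the SSDs, which matters a great deal at this query rate.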
Russell Fox (database developer) commented:
Honestly, at some point you're better off switching to a NoSQL store like MongoDB or DynamoDB: they're designed for huge scale-out.
I wouldn't persist the transactions as INSERTs into one of your tables.
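If the transactions do stay in MySQL, one common way to handle the 30-day retention from the question is daily RANGE partitioning, so expiry is a metadata-only DROP PARTITION instead of a massive DELETE. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical schema: one partition per day on the transaction table.
CREATE TABLE transactions (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATETIME NOT NULL,
    payload    VARBINARY(255),
    PRIMARY KEY (id, created_at)   -- partitioning column must be in every unique key
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p20180101 VALUES LESS THAN (TO_DAYS('2018-01-02')),
    PARTITION p20180102 VALUES LESS THAN (TO_DAYS('2018-01-03')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- Nightly job: drop the partition that has aged past the 30-day window.
ALTER TABLE transactions DROP PARTITION p20180101;
```

DROP PARTITION discards a whole day's data in one cheap operation, whereas `DELETE ... WHERE created_at < ...` would have to touch hundreds of billions of rows.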

Mario Bernheim (CTO, author) commented:
Thanks, everyone.
OK, it seems the problem is solved, but I don't understand how the accepted solution helped. Could you please be more specific about how your 512 GB SSD consumes 50G requests per day?