I recently inherited a MySQL database that hosts data for an Access front end over ODBC. In essence, the schema consists of about 8 large tables (6 million rows or more each) and a number of secondary tables. The data is stored like this:
Let's say I had a record that held Name, City, State, and Zip.
One would expect a single row with each piece in its own column, typed to hold that data, and indexed with a unique primary key.
Instead, this database assigns an index to the type of data being stored (1=Name, 2=City, 3=State, 4=Zip) and an index for the root context of the dataset (let's say that's the Name).
It then stores each piece of data on its own row, referencing the data-type index and the root index. Each row has a separate column for each data type it might hold (Double, Date/Time, Varchar, etc.).
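To make the layout concrete, here is a small runnable sketch in Python/sqlite3. The table and column names are made up for illustration, not the actual schema; as far as I can tell this is the pattern usually called Entity-Attribute-Value (EAV):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# One row per attribute value; a typed column exists for each possible
# data type, and only the one matching that attribute is populated.
cur.execute("""
    CREATE TABLE eav_data (
        root_id     INTEGER,  -- which logical record this row belongs to
        attr_type   INTEGER,  -- 1=Name, 2=City, 3=State, 4=Zip
        val_varchar TEXT,     -- string attributes
        val_double  REAL,     -- numeric attributes
        val_date    TEXT      -- date/time attributes
    )
""")

# The single logical record (Name, City, State, Zip) becomes four rows:
cur.executemany(
    "INSERT INTO eav_data (root_id, attr_type, val_varchar) VALUES (?, ?, ?)",
    [
        (1, 1, "John Smith"),
        (1, 2, "Springfield"),
        (1, 3, "IL"),
        (1, 4, "62704"),
    ],
)

row_count = cur.execute(
    "SELECT COUNT(*) FROM eav_data WHERE root_id = 1"
).fetchone()[0]
print(row_count)  # 4 rows to store what a conventional schema holds in 1
```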
So in order to retrieve the complete record, a query must join the table to itself 4 times on the root context id and then pull each column.
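Pulling one complete record back out looks roughly like the following (again sketched in Python/sqlite3 with made-up names so it is runnable; the real queries go through ODBC against MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""
    CREATE TABLE eav_data (
        root_id INTEGER, attr_type INTEGER,
        val_varchar TEXT, val_double REAL, val_date TEXT
    )
""")
cur.executemany(
    "INSERT INTO eav_data (root_id, attr_type, val_varchar) VALUES (?, ?, ?)",
    [(1, 1, "John Smith"), (1, 2, "Springfield"), (1, 3, "IL"), (1, 4, "62704")],
)

# Reassemble one logical record: join the table to itself once per
# attribute, matching on root_id and pinning each alias to one attr_type.
record = cur.execute("""
    SELECT n.val_varchar AS name,
           c.val_varchar AS city,
           s.val_varchar AS state,
           z.val_varchar AS zip
    FROM eav_data AS n
    JOIN eav_data AS c ON c.root_id = n.root_id AND c.attr_type = 2
    JOIN eav_data AS s ON s.root_id = n.root_id AND s.attr_type = 3
    JOIN eav_data AS z ON z.root_id = n.root_id AND z.attr_type = 4
    WHERE n.attr_type = 1 AND n.root_id = 1
""").fetchone()
print(record)  # ('John Smith', 'Springfield', 'IL', '62704')
```

Every extra attribute on a record means another self-join, which is where I assume the lag comes from at 6M+ rows.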
With this schema, anything one does is going to be slow, of course. I have been tasked with tuning the database as well as possible to address the ever-increasing lag time on queries.
I have plenty of resources available: CPU usage stays below 25% and memory below 40%. Attached is my my.cnf.
I don't even know where to start on this, so any discussion is welcome. Thank you!