
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 351

PHP/MySQL: creating a new table (in real time) when a new day starts

I've never done this before, but I would like to have my PHP script create a completely new table for each new day, as there will be several hundred thousand rows added daily.

Do I need to create a cron job to do this, or can it be done in real time with PHP at the first request of the day?

if table does not exist then create table, else add the row

basically :)
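For what it's worth, the check-and-create step described above could be sketched in SQL like this (the `clicks` template table and the date-suffixed name are hypothetical; the PHP side would build the name from something like `date('Ymd')`):

```sql
-- On the first request of the day, create that day's table if it is missing.
-- 'clicks' is a hypothetical template table holding the desired schema;
-- CREATE TABLE ... LIKE copies its columns, indexes, and AUTO_INCREMENT setup.
CREATE TABLE IF NOT EXISTS clicks_20090315 LIKE clicks;
```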
Asked by: mcainc
1 Solution
 
tigin44 commented:
You can do it as you mentioned, but do you really need to create a table for each day? Several thousand rows per day is not much load for a table; using indexes you can reach them very quickly. My advice is to think this over...
 
mcainc (Author) commented:
It is several hundred thousand rows... like 300,000 - 400,000.

I tried this before, and after a week CPU consumption got extremely high when reading the data and calculating stats.
 
tigin44 commented:
Another solution would be to make an archive table: at the end of the day, say at 24:00, copy the rows from the original table to the archive and then delete them from the original table.
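That copy-then-delete step could look something like this (table and column names are illustrative; this would run from a cron job at day's end):

```sql
-- Move yesterday's and older rows into the archive table...
INSERT INTO clicks_archive
SELECT * FROM clicks
WHERE created_at < CURDATE();

-- ...then clear the same rows from the live table.
DELETE FROM clicks
WHERE created_at < CURDATE();
```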

 
mcainc (Author) commented:
Oh I see, like a cron script to move the data out of this table into another, based on the timestamp?

This may work; however, an issue lies with the unique id (auto number). I want this id to start at 1 every day (I am base encoding this row id and passing out a short string to represent it in a shortened URL, and I am trying to keep this string as short as possible).
 
tigin44 commented:
I see... when you move the data to the archive table and delete it from the original one, you can also reset the id field to 1 by resetting the table's AUTO_INCREMENT value.
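In MySQL that reset is done on the table's AUTO_INCREMENT counter (the "identity field" wording is SQL Server terminology); a sketch, assuming the table has already been emptied:

```sql
-- After the archived rows are deleted, restart the counter at 1.
-- Note: InnoDB will not set AUTO_INCREMENT below the current maximum id,
-- so this only takes effect once the table is empty; TRUNCATE TABLE
-- resets the counter implicitly as well.
ALTER TABLE clicks AUTO_INCREMENT = 1;
```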
 
Lordgobbledegook commented:
What sort of indexing do you have on your MySQL table? Something doesn't seem right if your CPU load is increasing significantly after only a few million rows. Is the nature of your data more inserts, or reads and edits?
 
mcainc (Author) commented:
The only index I have is the auto number id field.

There are definitely more inserts than reads; the reads are done by me here and there, when I view my stats page and have to make some graphs with the data.
 
Lordgobbledegook commented:
From your description the inserts are working well. The problem is reading and processing the data (assuming you are doing it in large chunks). You should try adding indexes on appropriate fields.

As an example, add an index to a date field. You can then work out your stats based on a date range, and MySQL will use the index instead of having to go through every record. Indexes are definitely your friend and are a must for any large database :)

The catch is that every time you insert, edit, or remove data, the index has to be updated.
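A date index as suggested above might look like this (column and index names are assumptions):

```sql
-- Index the timestamp column so date-range stats queries avoid a full scan.
ALTER TABLE clicks ADD INDEX idx_created_at (created_at);

-- With the index in place, a range query like this can use it:
SELECT COUNT(*) FROM clicks
WHERE created_at >= '2009-03-01' AND created_at < '2009-03-08';
```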
 
mcainc (Author) commented:
I see; I never really understood how indexes worked.

When you say that after modifying the table the indexes have to be updated... how do I update them? Do I just re-define the index?
 
Lordgobbledegook commented:
Once created, the indexes will update automatically, so there is nothing you need to do :)
 
mcainc (Author) commented:
Oh, I see what you're saying; there is a slight performance trade-off for having indexes? I probably wouldn't notice it.

The server is pretty beefy; it would just take very long to serve up data once there were a couple million rows.

I'm going to give this a shot for now on the existing data that I have.

I still do need a way to archive this data, because I am base encoding the id number when passing out URLs and then decoding it and updating the appropriate row when clicked - it works up to 8 digits the way I need it to.
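As an aside, the base encoding mentioned here can be done directly in MySQL with CONV() (or in PHP with base_convert()); both handle bases up to 36. A base-36 sketch with hypothetical id values:

```sql
-- Encode a numeric row id as base 36 to shorten the URL token, then decode it.
-- An 8-digit id like 99999999 fits in 6 base-36 characters.
SELECT CONV(300000, 10, 36);   -- returns '6FHC'
SELECT CONV('6FHC', 36, 10);   -- returns '300000'
```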
 
Lordgobbledegook commented:
A very slight performance loss for any writes to the database. This includes inserts, edits, and deletes. Indexes also require some disk space, but it is quite likely that this would not be an issue.

The benefit is a very nice performance boost when reading data, particularly when filtering on indexed fields. Due to the above negatives, you would only index fields that you search on. If you do not search or sort by "Fax Number", don't add an index on that field. However, you may very commonly search on "Surname", so definitely index that one.

For archiving you really need to run a cron job. May I suggest...

When you add a new row, insert it into 2 tables. Then schedule a script to TRUNCATE your first table every day at, say, midnight. The second table will have everything in it all the time.
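The double-write plus nightly TRUNCATE scheme could be sketched as follows (table and column names are hypothetical):

```sql
-- On each hit, write the row to both the live table and the permanent one.
INSERT INTO clicks_today (created_at, url) VALUES (NOW(), '/some/path');
INSERT INTO clicks_all   (created_at, url) VALUES (NOW(), '/some/path');

-- Scheduled nightly (e.g. from cron at midnight): empty the live table,
-- which also resets its AUTO_INCREMENT counter back to 1.
TRUNCATE TABLE clicks_today;
```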
 
mcainc (Author) commented:
Makes complete sense! You're the man... thanks!
 
Lordgobbledegook commented:
Just be aware that if you have a primary key being auto-incremented in both the first and second tables, this key may differ between them. After all, the first table is being "reset" each day.

This may not be an issue, but be aware just in case. Of course, there are many ways to work around this as well, depending on your needs.

All the best!
