Solved

PostgreSQL Database design for time based data

Posted on 2014-02-27
478 Views
Last Modified: 2014-03-14
Hi,

I am trying to work out the best database design for the following application.

In a factory we have about 500 machines/sensors that send back data every minute when in operation.

Each record received has
Date Time
Machine ID
Event Type - High / Low / Normal / Startup / Shutdown etc.
Various small data fields

For an average 8-hour day that is about 240,000 records (500 machines × 60 minutes × 8 hours), and we keep records for many years.

All of the queries will have a date range as part of the search. We will be querying things like:
Records between Date1 and Date2 WHERE MachineID = X
Records between Date1 and Date2 WHERE EventType = Low
Records between Date1 and Date2 WHERE MachineID = X and EventType = 5
Latest record WHERE MachineID = X and EventType = 1
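
To make that concrete, the single-table layout I have in mind looks roughly like this (table name, column types and the literal values in the queries are just placeholders at this stage):

CREATE TABLE machine_event (
    event_time  timestamp NOT NULL,   -- Date Time
    machine_id  integer   NOT NULL,   -- Machine ID
    event_type  smallint  NOT NULL,   -- High / Low / Normal / Startup / Shutdown etc. as codes
    reading_1   numeric,              -- various small data fields
    reading_2   numeric
);

-- Records between Date1 and Date2 for one machine
SELECT * FROM machine_event
WHERE event_time BETWEEN '2014-02-01' AND '2014-02-27' AND machine_id = 17;

-- Records between Date1 and Date2 for one machine and one event type
SELECT * FROM machine_event
WHERE event_time BETWEEN '2014-02-01' AND '2014-02-27'
  AND machine_id = 17 AND event_type = 5;

-- Latest record for one machine and one event type
SELECT * FROM machine_event
WHERE machine_id = 17 AND event_type = 1
ORDER BY event_time DESC
LIMIT 1;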

Questions
Should each machineID have its own table?
Should my primary key be a composite of DateTime, MachineID and EventType, or should it be a 'surrogate' key?
What sort of index should I create?

Thanks
Question by:mhdi
8 Comments
 

Author Comment

by:mhdi
Yes, I am intending to use PostgreSQL. I selected the other topics as I figured a question on database design would most likely be similar across all SQL databases.
 
LVL 35

Assisted Solution

by:Terry Woods
Terry Woods earned 167 total points
My experience with large databases is that performance is generally OK as long as the indexes are suitable for the query being run. If you had one table with all the machines, then for example you would want indexes on the following (sketched in SQL below):
1. machine_id, event_type and date (still works OK if you don't provide a date)
2. machine_id and date (caters for when you don't have an event type)

For any other fields that are regularly queried, you'd need further indexes.
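
Something along these lines, assuming one table called machine_event with columns machine_id, event_type and event_time (the names are placeholders, not a prescription):

-- 1. machine_id + event_type + date
CREATE INDEX machine_event_machine_type_time_idx
    ON machine_event (machine_id, event_type, event_time);

-- 2. machine_id + date, for queries that don't filter on event type
CREATE INDEX machine_event_machine_time_idx
    ON machine_event (machine_id, event_time);

The "latest record for machine X and event type Y" query should also be able to read the first index backwards (ORDER BY event_time DESC LIMIT 1) rather than scanning a whole range.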

All that said, I've worked with Informix and Oracle rather than PostgreSQL when it comes to large quantities of data. It would be worthwhile writing a script to generate the quantity of data you're going to need to handle (i.e. several years' worth), load it into the database, and test performance before committing to a database design and application that may start to run into trouble later if you don't test it in advance.
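
One way to generate that test data inside PostgreSQL itself, rather than with an external script, is generate_series; a rough sketch, assuming a table like machine_event(event_time, machine_id, event_type, reading_1) with made-up value ranges:

-- Roughly a year of per-minute rows for 500 machines (~260 million rows);
-- start with a smaller date range first to gauge load time and disk usage.
INSERT INTO machine_event (event_time, machine_id, event_type, reading_1)
SELECT t, m, (random() * 4)::smallint, random() * 100
FROM generate_series(timestamp '2013-01-01',
                     timestamp '2013-12-31 23:59',
                     interval '1 minute') AS t,
     generate_series(1, 500) AS m;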

I personally would try to put it all in one table if PostgreSQL can handle it. It is time-consuming to work around de-normalised data later.
 
LVL 26

Expert Comment

by:Zberteoc
I would use one table, with the following indexes:

DateTime, MachineId, EventType
MachineId, EventType, DateTime
EventType, DateTime, MachineId
 
LVL 35

Expert Comment

by:Terry Woods
@Zberteoc, could you please explain your reasoning for that choice of indexes? I don't understand why you've suggested those, and having extra columns in an index for a table containing an enormous quantity of data may have a performance cost.
 
LVL 26

Assisted Solution

by:Zberteoc
Zberteoc earned 166 total points
You are right, my bad. It should only be:

DateTime, MachineId, EventType
MachineId, EventType
EventType

Just in case you have to search on any one of those columns alone. It really all depends on how you query the table. If you are sure you will never search on EventType only, then you don't need that index. However, don't forget that for a composite index to be used you HAVE TO have the first column of the index in the search criteria or in a join clause.
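
To illustrate with the (MachineId, EventType) index (table and column names assumed from the question):

-- Can use the (MachineId, EventType) index: the leading column is in the WHERE clause
SELECT * FROM machine_event WHERE machine_id = 17 AND event_type = 1;
SELECT * FROM machine_event WHERE machine_id = 17;

-- Will generally NOT use that index: the leading column is missing,
-- which is why a separate index starting with EventType is listed above
SELECT * FROM machine_event WHERE event_type = 1;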
 
LVL 26

Expert Comment

by:Zberteoc
Sorry, I removed a comment meant for another question. :)
 
LVL 61

Accepted Solution

by:gheist
gheist earned 167 total points
Inserting 30,000 records/hour = 500/minute ≈ 8 rows/second.
It will work just great on any average machine.
If you want to keep 500 persistent connections, consider pgpool instead of beefing up PostgreSQL.

Indexes are for data retrieval; for collecting data you don't need them. They actually add some I/Os (say ~5 I/Os for a single insert, plus ~3 per index).
E.g. with 3 indexes at 8 rows/s: (5 + 9) × 8 ≈ 110 I/O/s for data collection alone, which is roughly a whole spinning disk's worth of random I/O.
That leads us to placing 1000+ IO/s SSD storage in the data collection path.
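
If you want to see that per-insert cost for yourself once the indexes exist, EXPLAIN with the BUFFERS option on a single insert reports the pages touched for the heap plus each index (table and column names here are placeholders, and the ROLLBACK discards the test row):

BEGIN;

EXPLAIN (ANALYZE, BUFFERS)   -- BUFFERS shows how many pages were read/written,
                             -- i.e. the heap page plus one or more pages per index
INSERT INTO machine_event (event_time, machine_id, event_type, reading_1)
VALUES (now(), 17, 1, 99.5);

ROLLBACK;                    -- throw the test row away again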
