  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 557

PostgreSQL database design for time-based data

Hi,

I am trying to work out the best database design for the following application.

In a factory we have about 500 machines/sensors that send back data every minute when in operation.

Each record received has
Date Time
Machine ID
Event Type - High / Low / Normal / Startup / Shutdown etc.
Various small data fields

For an average 8 hour day there would be 240,000 records and we keep records for many years.

All of the queries will have a date range as part of the search. We will be querying things like:
Records between Date1 and Date2 WHERE MachineID = X
Records between Date1 and Date2 WHERE EventType = Low
Records between Date1 and Date2 WHERE MachineID = X and EventType = 5
Latest record WHERE MachineID = X and EventType = 1
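
For example (the table and column names below are just placeholders I'm using for illustration), these might look something like:

SELECT * FROM machine_events
WHERE event_time BETWEEN :date1 AND :date2
  AND machine_id = :x;

SELECT * FROM machine_events
WHERE event_time BETWEEN :date1 AND :date2
  AND event_type = 'Low';

-- Latest record for a given machine and event type
SELECT * FROM machine_events
WHERE machine_id = :x
  AND event_type = :evt
ORDER BY event_time DESC
LIMIT 1;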

Questions
Should each machineID have its own table?
Should my primary key be a composite of DateTime, MachineID and EventType, or should it be a 'surrogate' key?
What sort of index should I create?

Thanks
Asked by: mhdi
3 Solutions
 
mhdi (Author) Commented:
Yes, I am intending to use PostgreSQL. I selected the other topics as I figured the question on database design would most likely be similar across all SQL databases.
 
Terry Woods (IT Guru) Commented:
My experience with large databases is that performance is generally ok as long as the indexes are suitable for the query being run. If you had one table with all the machines, then for example you would want indexes on:
1. machine_id, event_type and date (this still works OK if you don't provide a date)
2. machine_id and date (caters for when you don't have an event type)

For any other fields that are regularly queried, you'd need further indexes.
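
As a sketch, assuming a single table called machine_events (the table and column names are placeholders), the two indexes above might be created along these lines:

-- machine_id + event_type + date
CREATE INDEX idx_machine_event_time
    ON machine_events (machine_id, event_type, event_time);

-- machine_id + date, for queries that don't filter on event type
CREATE INDEX idx_machine_time
    ON machine_events (machine_id, event_time);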

All that said, I've worked with Informix and Oracle rather than Postgres when it comes to large quantities of data. It would be worthwhile writing a script to generate the quantity of data you're going to need to handle (i.e. several years' worth), load it into the database, and test performance before committing to a database design and application that may start to run into trouble later (if you don't test it in advance).
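
As a rough sketch of such a generator in PostgreSQL (the schema and volumes are purely illustrative; adjust the period and machine count to what you expect to handle):

-- One row per machine per minute over a sample year (~260 million rows at 500 machines)
INSERT INTO machine_events (event_time, machine_id, event_type)
SELECT t, m,
       (ARRAY['High','Low','Normal','Startup','Shutdown'])[1 + floor(random() * 5)::int]
FROM generate_series('2016-01-01'::timestamp,
                     '2016-12-31'::timestamp,
                     interval '1 minute') AS t,
     generate_series(1, 500) AS m;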

I personally would try to put it all in one table if Postgres can handle it. It is time consuming to compensate for de-normalised data.
 
Zberteoc Commented:
I would use one table with the following indexes:

DateTime, MachineId, EventType
MachineId, EventType, DateTime
EventType, DateTime, MachineId
 
Terry Woods (IT Guru) Commented:
@Zberteoc, could you please explain your reasoning for that choice of indexes? I don't understand why you've suggested those, and having extra columns in an index for a table containing an enormous quantity of data may have a performance cost.
 
Zberteoc Commented:
You are right, my bad. It should only be:

DateTime, MachineId, EventType
MachineId, EventType
EventType

The last one is just in case you ever have to search on that column alone. It all really depends on how you query the table; if you are sure you will never search on EventType only, then you don't need that index. However, don't forget that for a composite index to be used, the first column of the index HAS TO appear in the search criteria or in join clauses.
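
To illustrate the leading-column rule with a quick sketch (placeholder table and column names again):

-- Can use an index on (MachineId, EventType): the leading column appears in the WHERE clause
EXPLAIN SELECT * FROM machine_events WHERE machine_id = 42 AND event_type = 'Low';

-- Cannot use that index efficiently, because the leading column (machine_id) is missing;
-- this is why a separate index on event_type alone may still be worth having
EXPLAIN SELECT * FROM machine_events WHERE event_type = 'Low';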
 
Zberteoc Commented:
Sorry, I removed a comment meant for another question. :)
 
gheist Commented:
Inserts: 500 rows/min = 30,000 rows/h ≈ 8 rows/s.
That will work just great on any average machine.
If you want to keep 500 persistent connections, consider pgpool instead of beefing up PostgreSQL.


Indexes are for data retrieval; for collecting data you don't need them. They actually add some I/O (say ~5 I/Os per single insert, plus ~3 per index).
E.g. with 3 indexes at 8 rows/s, that is (5 + 3*3) * 8 ≈ 110 I/O/s for collecting data alone, which is already a serious load for spinning disks.
That leads towards placing 1000+ I/O/s SSD storage in the data collection path.
