Solved

PostgreSQL Database design for time based data

Posted on 2014-02-27
Medium Priority
536 Views
Last Modified: 2014-03-14
Hi,

I am trying to work out the best database design for the following application.

In a factory we have about 500 machines/sensors that send back data every minute when in operation.

Each record received has
Date Time
Machine ID
Event Type - High / Low / Normal / Startup / Shutdown etc.
Various small data fields

For an average 8-hour day there would be 240,000 records (500 machines × 60 records/hour × 8 hours), and we keep records for many years.

All of the queries will have a date range as part of the search. We will be querying things like:
Records between Date1 and Date2 WHERE MachineID = X
Records between Date1 and Date2 WHERE EventType = Low
Records between Date1 and Date2 WHERE MachineID = X and EventType = 5
Latest record WHERE MachineID = X and EventType = 1
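
In SQL terms, those queries would look something like this (a rough sketch; machine_event and the column names are just placeholders, since what the table and key should actually be is what I'm asking below):

-- Hypothetical single table holding all machines' events;
-- whether it should be one table, and what the key should be,
-- is exactly what I'm asking below.
CREATE TABLE machine_event (
    event_time  timestamp NOT NULL,
    machine_id  integer   NOT NULL,
    event_type  smallint  NOT NULL,  -- High / Low / Normal / Startup / Shutdown ...
    reading     numeric             -- one of the various small data fields
);

-- Records between Date1 and Date2 for one machine:
SELECT * FROM machine_event
WHERE event_time BETWEEN '2014-01-01' AND '2014-02-01'
  AND machine_id = 42;

-- Latest record for one machine and event type:
SELECT * FROM machine_event
WHERE machine_id = 42 AND event_type = 1
ORDER BY event_time DESC
LIMIT 1;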

Questions
Should each machineID have its own table?
Should my primary key be a composite of DateTime, MachineID and EventType, or should it be a 'surrogate' key?
What sort of index should I create?

Thanks
Question by:mhdi
8 Comments
 

Author Comment

by:mhdi
ID: 39893830
Yes, I am intending to use PostgreSQL. I selected the other topics as I figured the question on database design would most likely be similar across all SQL databases.
 
LVL 35

Assisted Solution

by:Terry Woods
Terry Woods earned 668 total points
ID: 39893841
My experience with large databases is that performance is generally OK as long as the indexes are suitable for the query being run. If you had one table with all the machines, then for example you would want indexes on:
1. machine_id, event_type and date (still works OK if you don't provide a date)
2. machine_id and date (caters for when you don't have an event type)

For any other fields that are regularly queried, you'd need further indexes.

All that said, I've worked with Informix and Oracle rather than Postgres when it comes to large quantities of data. It would be worthwhile writing a script to generate the quantity of data you're going to need to handle (i.e. several years' worth) and load it into the database to test performance, before committing to a database design and application that may start to run into trouble later if you don't test it in advance.

I personally would try to put it all in one table if Postgres can handle it. It is time-consuming to compensate for de-normalised data.
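
For illustration, the two indexes above and a quick synthetic-data generator might look like this against the hypothetical machine_event table sketched in the question (generate_series is standard PostgreSQL; the date range and row volume are just examples):

-- Index 1: machine_id + event_type + date
CREATE INDEX idx_event_machine_type_time
    ON machine_event (machine_id, event_type, event_time);

-- Index 2: machine_id + date, for queries with no event type
CREATE INDEX idx_event_machine_time
    ON machine_event (machine_id, event_time);

-- Several years of synthetic data: one row per machine per minute
-- during an 8-hour working window (~350 million rows for 4 years).
INSERT INTO machine_event (event_time, machine_id, event_type, reading)
SELECT t, m, (random() * 4)::smallint, random() * 100
FROM generate_series('2010-01-01'::timestamp,
                     '2013-12-31'::timestamp,
                     interval '1 minute') AS t,
     generate_series(1, 500) AS m
WHERE t::time BETWEEN '08:00' AND '16:00';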
 
LVL 27

Expert Comment

by:Zberteoc
ID: 39894946
I would use one table, with the following indexes:

DateTime, MachineId, EventType
MachineId, EventType, DateTime
EventType, DateTime, MachineId
 
LVL 35

Expert Comment

by:Terry Woods
ID: 39895719
@Zberteoc, could you please explain your reasoning for that choice of indexes? I don't understand why you've suggested those, and having extra columns in an index for a table containing an enormous quantity of data may have a performance cost.
 
LVL 27

Assisted Solution

by:Zberteoc
Zberteoc earned 664 total points
ID: 39896043
You are right, my bad. It should only be:

DateTime, MachineId, EventType
MachineId, EventType
EventType

Those cover the case where you have to search on any one of the columns alone. It all depends on how you query the table: if you are sure you will never search on EventType alone, then you don't need that index. However, don't forget that for a composite index to be used, you HAVE TO have the first column of the index in the search criteria or in join clauses.
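
To make that leftmost-column rule concrete, here is a sketch against the hypothetical machine_event table from the question:

CREATE INDEX idx_time_machine_type ON machine_event (event_time, machine_id, event_type);
CREATE INDEX idx_machine_type      ON machine_event (machine_id, event_type);
CREATE INDEX idx_type              ON machine_event (event_type);

-- Leading column machine_id is in the criteria, so idx_machine_type applies:
SELECT * FROM machine_event WHERE machine_id = 42 AND event_type = 1;

-- No machine_id in the criteria, so idx_machine_type is unlikely to help;
-- this is what the single-column event_type index is for:
SELECT * FROM machine_event WHERE event_type = 1;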
 
LVL 27

Expert Comment

by:Zberteoc
ID: 39896071
Sorry, I removed a comment meant for another question. :)
 
LVL 62

Accepted Solution

by:gheist
gheist earned 668 total points
ID: 39897844
Insert rate: 30,000 records/hour = 500/minute ≈ 8 rows/second.
It will work just great on any average machine.
If you want to keep 500 persistent connections, consider pgpool instead of beefing up PostgreSQL.


Indexes are for data retrieval; for collecting data you don't need them. They actually add some I/Os (say ~5 I/Os for a single insert, plus ~3 per index).
E.g. with 3 indexes, 8 rows/s costs (5 + 3×3) × 8 ≈ 100 I/Os per second for collecting data alone, which is around the limit of a single spinning disk.
That leads us to placing 1000+ IO/s SSD storage in the data collection path.
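
As a side note on keeping the collection path cheap (a common PostgreSQL technique, not something gheist specifically proposes): batching many rows per transaction, or using COPY, spreads that per-insert I/O cost across a whole batch. A sketch, with a hypothetical file path:

-- Multi-row insert: one round trip and one commit for a whole minute's batch
INSERT INTO machine_event (event_time, machine_id, event_type, reading)
VALUES (now(), 1, 0, 20.5),
       (now(), 2, 3, 19.8),
       (now(), 3, 0, 21.1);

-- COPY: the cheapest bulk-load path PostgreSQL offers
-- (server-side path; requires appropriate privileges)
COPY machine_event (event_time, machine_id, event_type, reading)
FROM '/tmp/minute_batch.csv' WITH (FORMAT csv);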