PostgreSQL database design for time-based data

Hi,

I am trying to work out the best database design for the following application.

In a factory we have about 500 machines/sensors that send back data every minute when in operation.

Each record received has the following fields (rough table sketch after the list):
Date Time
Machine ID
Event Type - High / Low / Normal / Startup / Shutdown etc.
Various small data fields
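
Something along these lines, where the table name, column names and types are only placeholders (nothing is decided yet):

-- Placeholder single-table sketch; names and types are assumptions
CREATE TABLE machine_event (
    event_time  timestamp NOT NULL,  -- Date Time
    machine_id  integer   NOT NULL,  -- Machine ID
    event_type  smallint  NOT NULL,  -- High / Low / Normal / Startup / Shutdown etc.
    value_1     numeric,             -- various small data fields
    value_2     numeric
);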

For an average 8 hour day there would be 240,000 records and we keep records for many years.

All of the queries will have a date range as part of the search. We will be querying things like the following (rough SQL after the list) -
Records between Date1 and Date2 WHERE MachineID = X
Records between Date1 and Date2 WHERE EventType = Low
Records between Date1 and Date2 WHERE MachineID = X and EventType = 5
Latest record WHERE MachineID = X and EventType = 1
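
In rough SQL terms, against the placeholder table sketched above (parameter names like :date1 are just stand-ins):

SELECT * FROM machine_event
WHERE event_time BETWEEN :date1 AND :date2 AND machine_id = :x;

SELECT * FROM machine_event
WHERE event_time BETWEEN :date1 AND :date2 AND event_type = :low;

SELECT * FROM machine_event
WHERE event_time BETWEEN :date1 AND :date2 AND machine_id = :x AND event_type = 5;

SELECT * FROM machine_event
WHERE machine_id = :x AND event_type = 1
ORDER BY event_time DESC
LIMIT 1;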

Questions
Should each machineID have its own table?
Should my primary key be a composite of DateTime, MachineID and EventType, or should it be a 'surrogate' key?
What sort of index should I create?

Thanks,
mhdi

mhdi (Author) commented:
Yes, I am intending to use PostgreSQL. I selected the other topics because I figured the question on database design will most likely be similar across all SQL databases.
Terry Woods (IT Guru) commented:
My experience with large databases is that performance is generally OK as long as the indexes suit the queries being run. If you had one table covering all the machines, then for example you would want indexes on the following (DDL sketch after the list):
1. machine_id, event_type and date (this still works OK if you don't provide a date)
2. machine_id and date (covers the case where you don't have an event type)
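
In PostgreSQL that would look something like this, assuming the placeholder table and column names sketched in the question:

-- Index 1: machine_id, event_type, date
CREATE INDEX machine_event_machine_type_time_idx
    ON machine_event (machine_id, event_type, event_time);

-- Index 2: machine_id, date
CREATE INDEX machine_event_machine_time_idx
    ON machine_event (machine_id, event_time);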

For any other fields that are regularly queried, you'd need further indexes.

All that said, I've worked with Informix and Oracle rather than PostgreSQL when it comes to large quantities of data. Before committing to a database design and application that may start to run into trouble later, it would be worthwhile writing a script to generate the quantity of data you're going to need to handle (i.e. several years' worth), load it into the database, and test performance in advance.
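
A minimal sketch of such a generator, assuming the placeholder table above (the value ranges and the 8:00-16:00 operating window are made up for illustration):

-- Roughly 3 years of one-row-per-minute data for 500 machines,
-- limited to an 8-hour operating day (~240,000 rows/day)
INSERT INTO machine_event (event_time, machine_id, event_type, value_1, value_2)
SELECT t, m, (random() * 4)::smallint, random() * 100, random() * 100
FROM generate_series(now() - interval '3 years', now(), interval '1 minute') AS t,
     generate_series(1, 500) AS m
WHERE extract(hour FROM t) BETWEEN 8 AND 15;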

I personally would try to put it all in one table if PostgreSQL can handle it; it is time consuming to work around de-normalised data (e.g. one table per machine) later on.
Zberteoc commented:
I would use one table with the following indexes:

DateTime, MachineId, EventType
MachineId, EventType, DateTime
EventType, DateTime, MachineId

Terry Woods (IT Guru) commented:
@Zberteoc, could you please explain your reasoning for that choice of indexes? I don't understand why you've suggested those, and having extra columns in an index for a table containing an enormous quantity of data may have a performance cost.
Zberteoc commented:
You are right, my bad. It should only be:

DateTime, MachineId, EventType
MachineId, EventType
EventType

That is just in case you ever have to search on one of those columns alone. It really all depends on how you query the table: if you are sure you will never search on EventType alone, then you don't need that index. However, don't forget that for a composite index to be used you HAVE TO have the first column of the index in the search criteria or in the join clauses (see the example below).
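
A quick way to check that against the placeholder table and indexes sketched earlier is to compare plans with EXPLAIN (names are only the assumed ones):

-- Can use an index that starts with machine_id:
EXPLAIN SELECT * FROM machine_event
WHERE machine_id = 42 AND event_type = 1;

-- Unlikely to use that same index, because the leading column
-- (machine_id) does not appear in the criteria:
EXPLAIN SELECT * FROM machine_event
WHERE event_type = 1;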
Zberteoc commented:
Sorry, I removed a comment meant for another question. :)
gheist commented:
Inserting 30,000 records/hour = 500/minute ≈ 8 rows/second.
That will work just fine on any average machine.
If you want to keep 500 persistent connections open, consider pgpool rather than beefing up PostgreSQL.

Indexes are for data retrieval; for collecting data you don't need them. They actually add some I/Os (say ~5 I/Os for a single insert, plus ~3 per index).
E.g. with 3 indexes at 8 rows/s that is (5 + 3*3) * 8 = 112, call it ~100 I/O/s, for collecting data alone - roughly what a single spinning disk can sustain.
That points towards putting SSD storage capable of 1000+ I/O/s in the data collection path.