Would PostgreSQL support a 30-million-row table?

I have a question: can PostgreSQL support a table with 30 million rows? We have a .NET web application connecting to an Oracle database, but because Oracle licenses are very expensive, our IT chief is considering moving to a free SQL database. However, we use a table (read-only) that has 30 million rows, and we would need to add indexes to it. Is there any technical article on whether PostgreSQL will work for this?
Thanks!
Petronilo asked:
rjkimble commented:
PostgreSQL should work fine with such a table, as long as you have the right indexes on it. I have run it against a table with 3.5 million rows with no problem. However, whenever you update the table, you will have to pay particular attention to vacuuming and reindexing it.
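A minimal sketch of that maintenance routine, assuming a hypothetical read-heavy table `big_table` with a lookup column (all names here are illustrative, not from the original question):

```sql
-- Hypothetical table and index names; adapt to your schema.
CREATE INDEX idx_big_table_lookup ON big_table (lookup_col);

-- After bulk updates or deletes, reclaim dead row versions
-- and refresh the planner's statistics.
VACUUM ANALYZE big_table;

-- Rebuild the index if heavy churn has left it bloated.
REINDEX INDEX idx_big_table_lookup;
```

For a table that is mostly read-only, the VACUUM/REINDEX step only matters after the occasional bulk load or update.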
earth man2 commented:
Read the PostgreSQL FAQ:

http://www.postgresql.org/docs/faqs/FAQ.html

I quote it below -- all credit to the PostgreSQL documentation team.

4.5) What is the maximum size for a row, a table, and a database?
These are the limits:
    Maximum size for a database?             unlimited (32 TB databases exist)
    Maximum size for a table?                32 TB
    Maximum size for a row?                  1.6TB
    Maximum size for a field?                1 GB
    Maximum number of rows in a table?       unlimited
    Maximum number of columns in a table?    250-1600 depending on column types
    Maximum number of indexes on a table?    unlimited
Of course, these are not actually unlimited, but limited to available disk space and memory/swap space. Performance may suffer when these values get unusually large.
The maximum table size of 32 TB does not require large file support from the operating system. Large tables are stored as multiple 1 GB files so file system size limits are not important.
The maximum table size and maximum number of columns can be quadrupled by increasing the default block size to 32k.

AND NOTE
bigserial should be used if you anticipate the use of more than 2^31 identifiers over the lifetime of the table.
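As a sketch of that note (table and column names are illustrative): `bigserial` is backed by a 64-bit sequence, so the generated ids will not wrap at 2^31 the way a plain `serial` column would.

```sql
-- bigserial creates a 64-bit auto-incrementing id column,
-- safe for tables expected to exceed 2^31 rows over their lifetime.
CREATE TABLE events (
    id   bigserial PRIMARY KEY,
    body text
);
```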

The proof of the pudding is in the eating. Oracle gives you many nice features such as partitioning, but Postgres may perform just as well, depending on the complexity of your solution and the number of concurrent users.


Question has a verified solution.
