

Would PostgreSQL support a 30-million-row table?

Posted on 2004-08-06
Medium Priority
Last Modified: 2012-08-14
I have a question: can PostgreSQL support a table with 30 million rows? We have a .NET web application connecting to an Oracle DB, but because Oracle licenses are very expensive, our IT chief is considering moving to a free SQL database. However, we use a table (read-only) that has 30 million rows, and we would also need to add indexes to it. Is there any technical article on whether PostgreSQL would work for this?
Question by:Petronilo

Accepted Solution

rjkimble earned 100 total points
ID: 11739778
PostgreSQL should work fine with such a table, as long as you have the right indexes on it. I have run it against a table with 3.5 million rows with no problem. However, whenever you update the table, you will have to pay particular attention to vacuuming it and rebuilding its indexes.
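As a rough sketch of that maintenance routine, assuming a hypothetical read-mostly table named big_table with a lookup column code (both names are illustrative, not from the question), the indexing and upkeep commands might look like:

```sql
-- Index the columns your queries filter on (table and column names are hypothetical).
CREATE INDEX idx_big_table_code ON big_table (code);

-- After bulk updates or deletes, reclaim dead row space and refresh planner statistics.
VACUUM ANALYZE big_table;

-- If heavy churn has bloated an index, rebuild it from scratch.
REINDEX TABLE big_table;
```

For a table that is loaded once and then only read, a single VACUUM ANALYZE after the load is usually enough; the ongoing vacuum/reindex concern applies mainly to tables that see updates.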
LVL 22

Assisted Solution

by:earth man2
earth man2 earned 100 total points
ID: 11770605
See the PostgreSQL FAQ.


I quote it below -- all credit to the PostgreSQL documentation team.

4.5) What is the maximum size for a row, a table, and a database?
These are the limits:
    Maximum size for a database?             unlimited (32 TB databases exist)
    Maximum size for a table?                32 TB
    Maximum size for a row?                  1.6 TB
    Maximum size for a field?                1 GB
    Maximum number of rows in a table?       unlimited
    Maximum number of columns in a table?    250-1600 depending on column types
    Maximum number of indexes on a table?    unlimited
Of course, these are not actually unlimited, but limited to available disk space and memory/swap space. Performance may suffer when these values get unusually large.
The maximum table size of 32 TB does not require large file support from the operating system. Large tables are stored as multiple 1 GB files so file system size limits are not important.
The maximum table size and maximum number of columns can be quadrupled by increasing the default block size to 32k.

bigserial should be used if you anticipate the use of more than 2^31 identifiers over the lifetime of the table.
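To illustrate, a hypothetical table whose key might exceed 2^31 values over its lifetime could be declared like this (the table and column names are made up for the example):

```sql
-- bigserial backs the column with a 64-bit sequence (maximum 2^63-1),
-- avoiding the 2^31-1 ceiling of a plain serial/integer key.
CREATE TABLE events (
    id bigserial PRIMARY KEY,
    payload text
);
```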

The proof of the pudding is in the eating. Oracle gives you many nice features such as partitioning, but Postgres may perform just as well, depending on the complexity of your solution and the number of concurrent users.

