
PostgreSQL disk fragmentation causes performance problems on Windows

dyrset asked
I am using PostgreSQL to log data in my application. A number of rows are added periodically, but there are no updates or deletes. There are several applications that log to different databases.

This causes terrible disk fragmentation, which in turn degrades performance when retrieving data from the databases. The table files accumulate more than 50,000 fragments over time (max file size about 1 GB).

The problem seems to be that PostgreSQL grows the database files with only the room needed for the new data each time it is added. Because several applications are adding data to different databases, the additions are never contiguous.

I think that preallocating lumps of a given size, say 4 MB, would remove this problem. The max number of fragments in a 1 GB file would then be 256, which is no problem. Is this possible to configure in PostgreSQL? If not, how difficult would it be to implement in the database?

Someone else has this problem.
It might help to create a separate physical partition for this table on the disk. With only these writes going to that disk, they are more likely to be sequential, and it also simplifies defragmenting, since only PostgreSQL will be touching it.
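The suggestion above can be sketched with a tablespace; the drive letter, tablespace name, and table definition below are illustrative, not from the original post:

```sql
-- Assumes the dedicated partition is mounted as drive E: and the
-- directory below is writable by the PostgreSQL service account.
CREATE TABLESPACE logspace LOCATION 'E:/pg_logspace';

-- A hypothetical append-only log table placed on that tablespace.
CREATE TABLE sensor_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    source_id  integer     NOT NULL,
    value      double precision
) TABLESPACE logspace;
```

All writes for this table then go to the dedicated partition instead of interleaving with the other databases.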


Thank you for your answer. I see that is a possibility.
However, the number of logging applications can sometimes be quite high (up to 100 or more). Then it is impractical to create a physical partition for each of the databases.

Does this mean that it is not possible to preallocate file space for tables in PostgreSQL?

If performance is that important, I would not recommend Windows as the platform. Hardware is cheap enough that it makes sense to have a separate Linux server handle the DB.

To pre-allocate space on Linux, create a big enough file, loop-mount it as a device, and create your tablespace inside it. What to do when the file fills up, however, is complicated.
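A minimal sketch of that loop-mount approach, assuming root access; paths and sizes are illustrative:

```shell
# Pre-allocate a 1 GB file in one contiguous chunk, format it,
# and mount it so PostgreSQL can use it as a tablespace directory.
fallocate -l 1G /var/lib/pgspace.img
mkfs.ext4 -F /var/lib/pgspace.img
mkdir -p /mnt/pgspace
mount -o loop /var/lib/pgspace.img /mnt/pgspace
chown postgres:postgres /mnt/pgspace
# Then, from psql: CREATE TABLESPACE prealloc LOCATION '/mnt/pgspace';
```

Since the backing file is allocated up front, table growth inside it cannot fragment the host filesystem further.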

You could come up with a solution that uses PCIe SSD drives for current writes along with a platter-based partitioning scheme for historical data.

If you want to stay on Windows, see http://www.howtogeek.com/howto/5291/how-to-create-a-virtual-hard-drive-in-windows-7/

You can then create a PostgreSQL tablespace that points to the VHD (virtual hard drive) and create the table with a TABLESPACE clause.

And to rectify what you have now, run:
VACUUM FULL ANALYZE VERBOSE;
and then use contig from Sysinternals to defragment the compacted files.
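For example (the database name and data-directory path are placeholders; VACUUM FULL rewrites tables and takes exclusive locks, so run it in a maintenance window):

```shell
# Compact the tables and refresh planner statistics.
psql -d mylogdb -c "VACUUM FULL ANALYZE VERBOSE;"
# Stop the PostgreSQL service, then defragment the rewritten files
# with Sysinternals contig (-s recurses into subdirectories).
contig -s "C:\pgdata\base\*"
```

VACUUM FULL produces freshly written table files, which gives contig smaller, newer files to lay out contiguously.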


When the disk fills up, I assume fragmentation will be a problem even for file systems other than NTFS (Windows).
Adding an extra computer is also undesirable, even if the hardware is cheap, as it adds to the maintenance cost of the system.

Using SSD drives is clearly a possibility, though an expensive one.

So to sum up this topic: it is NOT possible to preallocate chunks of disk space for PostgreSQL tables.

This is a simple solution that would solve this issue, and would be of great use to others as well. How hard would it be to implement this in PostgreSQL?
If you have Windows 7 you can:
1. Create a virtual hard drive (VHD): run diskmgmt.msc, then Computer -> Manage -> Action -> Create VHD.
2. Create a PostgreSQL tablespace that points to the VHD (virtual hard drive).
3. Create the table with a TABLESPACE clause.
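Steps 2 and 3 can be sketched as follows; the drive letter, tablespace, and table names are made up for the example:

```sql
-- After the VHD is attached and formatted (here as drive V:), and the
-- PostgreSQL service account has write access to the directory:
CREATE TABLESPACE vhd_space LOCATION 'V:/pg_tablespace';

CREATE TABLE app_log (
    logged_at timestamptz NOT NULL DEFAULT now(),
    message   text
) TABLESPACE vhd_space;
```

Because the VHD file itself is allocated as a single large file, the table's growth fragments only the space inside the VHD, not the host volume.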

I guess you can create a VHD for every table if you need to.