PsqlODBC / MS Access: Out of Memory on backend

Hi all!

I have an MS Access 2007 frontend with linked tables on a PostgreSQL 8.3.7 backend.
The tables are linked via ODBC with the psqlODBC 8.03.0400 driver.
I use the driver's default options.

I am trying to run a query that inserts ~350,000 rows from a local Access table into a linked PostgreSQL table. Both tables have the same structure, so the statement is basically something like:
INSERT INTO mylinkedtable (field1, field2, etc...) SELECT field1, field2, etc... FROM myaccesstable;

This query runs for quite a long time (10 minutes) and fails before completion with an ODBC error.
On the PostgreSQL server (backend), I get an "out of memory" error for this specific query.

The PostgreSQL server (running on Windows 2003) has 1GB of RAM and ~3GB of pagefile.
The process running the query eats up all available RAM before swapping.

The table is quite simple: no triggers or rules, only a primary key and 3 btree indexes.
My server settings are:
shared_buffers = 64MB
sort_buffers = 8MB
work_mem = 8MB
maintenance_work_mem = 128MB
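
For reference, the values the running server actually applies can be checked from any session; a quick query against the standard pg_settings view (available in 8.3) looks like this:

-- show the effective memory-related settings
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem');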

work_mem and maintenance_work_mem were higher (32MB and 512MB respectively), so I tried lowering them, but to no avail.

What should I change for this query to work? Why does PostgreSQL use so much memory (more than 2GB!) when all my resource settings are set so low?
How can I make sure that this won't happen again?

Thanks a lot for your help!
thewildAuthor Commented:
OK, Hiroshi Inoue (psqlODBC's developer) gave me the answer on the psqlODBC mailing list.
The "level of rollback" option was set to "statement" by default on the ODBC connection.
Setting it to "transaction" fixed the problem.

Memory usage now tops out at ~85MB, which is what I expected given my settings.
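
For context, with the "statement" level of rollback the driver protects every statement with its own savepoint (this is what the psqlODBC log further down in the thread shows, although it logs the pair after each INSERT), so a 350,000-row insert also issues 350,000 SAVEPOINT/RELEASE pairs, i.e. one subtransaction per row. With the "transaction" level it just sends the inserts. Roughly, with illustrative table, column and savepoint names:

-- level of rollback = statement: every INSERT gets its own savepoint
SAVEPOINT _EXEC_SVP_1;
INSERT INTO myschema.mytable (field1, field2) VALUES (1, 'a');
RELEASE _EXEC_SVP_1;
-- ...repeated for every single row

-- level of rollback = transaction: plain inserts inside the one transaction
INSERT INTO myschema.mytable (field1, field2) VALUES (1, 'a');
INSERT INTO myschema.mytable (field1, field2) VALUES (2, 'b');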
gheist Commented:
Can you post the exact error messages, like SQL codes, and something from the PostgreSQL log?
Is your Windows 2003 a 32-bit or a 64-bit edition? Any service packs?
Is your Access 2007 32-bit or 64-bit? Any service packs?
Is your PostgreSQL an EnterpriseDB ("one click") build or a pgInstaller build?
Client OS: XP or Vista? Service pack?
Is your pagefile set to the size recommended by Windows system properties?

You should increase wal_buffers to 8MB and checkpoint_segments to 15 for massive updates like this.
Also, your old work_mem was a lot better for this purpose.
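
In postgresql.conf terms that suggestion would look like the two lines below; note that wal_buffers requires a server restart while checkpoint_segments only needs a reload, and whether these particular values suit this workload is untested here:

wal_buffers = 8MB            # default in 8.3 is 64kB
checkpoint_segments = 15     # default is 3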
thewildAuthor Commented:
Hello gheist, thanks for your answer!

The exact error message is localized (in French), but here you go (I removed the table and field names):
2009-05-04 12:51:05 CEST ERREUR:  mémoire épuisée
2009-05-04 12:51:05 CEST DETAIL:  échec d'une requête de taille 560.
2009-05-04 12:51:05 CEST INSTRUCTION :  INSERT INTO  "<myschema>"."<mytable>"  (<field list>) VALUES (<value list>)

In English that is "ERROR: out of memory" with "DETAIL: Failed on request of size 560".

The error occurs on an insert of a single row; I think it is around the 250,000th row in the table. I know that psqlODBC inserts rows one at a time, so that is consistent.

The server OS is Windows 2003 R2 Standard Edition with SP2 (latest hotfixes applied), 32-bit edition.
The pagefile is between 1536 MB and 3072 MB (I think those are the recommended settings).

MS Access 2007 is... well, just Access 2007! 32-bit, no service pack.
The client is Windows XP Pro SP3, 32-bit. Do these parameters matter, since the problem seems to lie in the backend or in the ODBC driver? (just wondering)

PostgreSQL is the standard 8.3.7 binary; the installer used was pgInstaller, not the "one click" one.

I already have checkpoint_segments = 12, but wal_buffers is at its default, so I'll try a 1MB setting first (the default is 64kB!).

You say that the old work_mem was better, and I agree from a performance point of view, but here I have a query going OOM, so I was trying to lower its memory consumption by all means.

gheist Commented:
Looks like a memory leak. I wonder if Access is able to split the insert into 200,000-row batches.
Since you got the error in the PostgreSQL log, leave fixing Access itself aside.
Increase work_mem just for this operation; Access locks all rows of the inserted set for no reason.
After the insertion, close Access to release the session and its locks, then set work_mem back.

Your error is from PostgreSQL's internal allocator, nothing to do with system memory; work_mem is just 560 bytes short (locks took it all).
Locks also make use of shared_buffers, so raise that too.
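
If locks are the suspect, the lock table entries held by the loading session can be watched from a second connection while the insert runs. Note that PostgreSQL keeps row-level locks on the tuples themselves, so individual row locks will not appear here; this only shows entries in the shared lock table:

-- lock table entries per backend, biggest holders first
SELECT pid, count(*) AS locks_held
FROM pg_locks
GROUP BY pid
ORDER BY locks_held DESC;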
thewildAuthor Commented:
I now have work_mem at 16MB and shared_buffers at 128MB.
No help; memory usage is still huge, and the query crashes whenever the process gets above 2.5GB.

"Your error is from PostgreSQL internal allocator - nothing to do with system memory - just that work_mem is 560bytes short (locks took it all)"
Well, it seems to me that this does have something to do with system memory, because the postgres.exe process tries to allocate more memory that can be allocated by a single process on a Win32 OS !
What I really don't understand is how memory usage could get that high !
OK, there are loks implied, but the inserted data itself is only about 80MB (when exported to CSV for instance), and no one else is accessing the table ! I don't understand how such an insert could eat more than 3GB of memory.

I know very little about PostgreSQL's internals, but I understand that this has something to do with transactions.
The point is that I don't want a postgres.exe process to use such an amount of memory, EVER. That's why I keep my memory settings low! I prefer bad performance to query crashes.
gheist Commented:
Access locks all the rows it inserted, and this consumes memory.
Can you insert, say, 100,000 rows at a time, commit the transaction, exit Access, and redo the procedure?

Alternate way:
Use e.g. SQuirreL SQL and the JDBC-ODBC bridge: extract the Access table into a text file and then load it into PostgreSQL. Access is a real resource hog.
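
A sketch of that route, assuming the Access data has been exported to a CSV file (the path and column names below are made up); a server-side COPY FROM a file requires superuser rights, otherwise psql's \copy does the same thing reading the file from the client:

-- bulk-load the exported file in a single statement
COPY myschema.mytable (field1, field2)
FROM 'C:/temp/export.csv'
WITH CSV;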
thewildAuthor Commented:
I have written a VBA macro that inserts rows 10,000 at a time, but that's just a temporary workaround, not a real fix.
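
The macro itself is not posted in the thread; the idea boils down to slicing the source table on a key column (a hypothetical numeric id here) and letting Access commit between slices, something like:

INSERT INTO mylinkedtable (field1, field2)
SELECT field1, field2
FROM myaccesstable
WHERE id BETWEEN 1 AND 10000;
-- then 10001-20000, 20001-30000, and so on, committing after each slice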

I'd like to make sure that PostgreSQL never crashes that way. That is just not acceptable, and it looks like a bug to me.
Why would locking 300,000 rows use 3GB of memory anyway?
gheist Commented:
Access was crashing databases 10 years ago and has not changed over time. You have to use professional tools if you need adequate database access.
thewildAuthor Commented:
The problem is not in Access; it's either in psqlODBC or in the backend.
Access is indeed a crappy DBMS and I would never use it as a backend, but it is one of the best frontends I have used. That's really not the point here, though.

Well, let's get back to our problem:
I have a communication log from psqlODBC!
All it contains are INSERT statements (one per line), wrapped in SAVEPOINT xxx / RELEASE xxx statements.

It looks like this:

[39.370]conn=06D43E20, query='INSERT INTO  "<myschema>"."<mytable>"  (<my field list>) VALUES (<my values list>)'
[39.373]conn=06D43E20, query='SAVEPOINT _EXEC_SVP_06D48FC8'
[39.374]conn=06D43E20, query='RELEASE _EXEC_SVP_06D48FC8'
[39.375]conn=06D43E20, query='INSERT INTO  "<myschema>"."<mytable>"  (<my field list>) VALUES (<my values list>)'
[39.378]conn=06D43E20, query='SAVEPOINT _EXEC_SVP_06D48FC8'
[39.379]conn=06D43E20, query='RELEASE _EXEC_SVP_06D48FC8'
[39.380]conn=06D43E20, query='INSERT INTO  "<myschema>"."<mytable>"  (<my field list>) VALUES (<my values list>)'
[39.383]conn=06D43E20, query='SAVEPOINT _EXEC_SVP_06D48FC8'
[39.384]conn=06D43E20, query='RELEASE _EXEC_SVP_06D48FC8'

As you can see, all that happens here is a bunch of inserts within a single transaction. No explicit locking.

I don't understand why the SAVEPOINT / RELEASE pairs do not wrap the INSERT statements, though. What's the point of having them around nothing?
Or maybe the logger does not log queries exactly in the order they arrive. That could be it, because the log starts with two INSERT statements without SAVEPOINT / RELEASE, and only then does the sequence begin "normally".
gheist Commented:
You have to reconnect the datasource from time to time. There is an apparent memory leak somewhere (as with all data connections). No easy way around it.
thewildAuthor Commented:
Memory is released after COMMIT or ROLLBACK, so there is no need to close the connection. The transaction just must not be too long.
So for the moment my "1,000 rows at a time" batch processing works around the issue, but the bug is still there.

My guess is that this is an 8.3.7 bug. I'm going to install 8.3.6 locally to see how it behaves.
gheist Commented:
Install the one-click build; it is compiled with a different compiler, so this might be an optimization issue.