• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1782

ERROR: cache lookup failed for relation #######

What would cause an error such as "ERROR: cache lookup failed for relation #######"?  It happens from time to time (a lot lately) when trying to insert data from a CSV file.  I can later go back and import the same CSV file successfully, so there is nothing wrong with the file or the tables.  This is really frustrating because we have had a large number of insert failures.

I am using Perl with the PgPP.pm module to connect to the PostgreSQL 7.4 database.
Asked by: mock5c
1 Solution
 
gheistCommented:
Can I have the full version of the database and some idea of the underlying OS?
The most likely cause is a power failure or an unclean process shutdown.
REINDEX or VACUUM usually helps.
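
For reference, a minimal sketch of those maintenance commands, run from psql as the table's owner or a superuser; "mytable" is a hypothetical table name:

    -- Rebuild all indexes on one table
    REINDEX TABLE mytable;

    -- Reclaim dead space and refresh the planner's statistics
    VACUUM ANALYZE mytable;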
 
mock5cAuthor Commented:
OS: Debian 4.0
PostgreSQL version: 7.4.19

I have a cron job that runs a vacuum analyze every night, and it does a vacuum full once a week.

I have manually reindexed selected tables numerous times.
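
As a sketch, that schedule might look like this in crontab, using the vacuumdb client that ships with PostgreSQL; the times, the path, and the database name "mydb" are assumptions:

    # Nightly at 02:00: vacuum and refresh statistics
    0 2 * * * /usr/bin/vacuumdb --analyze mydb

    # Sunday at 03:00: full vacuum to reclaim disk space
    0 3 * * 0 /usr/bin/vacuumdb --full --analyze mydb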
 
gheistCommented:
It looks like an integrity problem in the (system) data, e.g. a table deleted while its index is still present, or damage caused by a power failure.

pg_dumpall (dump everything and restore into a fresh cluster) keeps coming to mind...

If you can recall power failures or out-of-memory kills over the life of the system, then you should spend some effort making PostgreSQL always sync its data and making the filesystem do full journaling (or get into a better power environment, with more memory).
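
A sketch of that hardening, assuming PostgreSQL 7.4 and an ext3 data partition; the device and mount point are hypothetical:

    # postgresql.conf: make sure WAL is flushed to disk at every commit
    fsync = true

    # /etc/fstab: full data journaling on the partition holding the data directory
    /dev/sda3  /var/lib/postgresql  ext3  defaults,data=journal  0  2

And the dump-and-restore escape hatch, per the pg_dumpall suggestion:

    pg_dumpall > cluster_backup.sql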
 
earth man2Commented:
If you do a lot of inserts, it is sometimes best to set AUTOCOMMIT ON, drop the indexes on that table, and recreate them after you've finished.

If you need to worry about transactional integrity, then insert into an identical table and copy the data from that table into the correct one afterwards.
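
A sketch of that staging-table approach; "target", "staging", "target_idx", and the column name are hypothetical:

    -- An identical but empty, index-free copy of the target table
    CREATE TABLE staging AS SELECT * FROM target WHERE false;

    -- ... bulk-insert the CSV rows into staging here ...

    DROP INDEX target_idx;                    -- optional: faster bulk load
    INSERT INTO target SELECT * FROM staging; -- move validated rows in one statement
    CREATE INDEX target_idx ON target (id);   -- recreate the index afterwards
    DROP TABLE staging;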
 
mock5cAuthor Commented:
For the CSV file imports, there are anywhere from 27 to 4,000 inserts per CSV file, so yes, there are a lot of inserts.  I am using transactions so I can roll back if a bad line of data is encountered in the CSV file (out of range, illegal value, etc.).  I think inserting into an identical table and then copying would be a solution to that.  Alternatively, I could have the program check the file's integrity before doing any inserts and then insert without transactions.
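
A minimal sketch of that transactional import pattern, using DBI with the DBD::PgPP driver mentioned in the question; the connection details, table, and two-column CSV layout are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection details
    my $dbh = DBI->connect('dbi:PgPP:dbname=mydb;host=localhost',
                           'user', 'password',
                           { AutoCommit => 0, RaiseError => 1 });

    my $sth = $dbh->prepare('INSERT INTO target (a, b) VALUES (?, ?)');

    eval {
        open my $fh, '<', 'data.csv' or die "open: $!";
        while (my $line = <$fh>) {
            chomp $line;
            my ($a, $b) = split /,/, $line;  # naive split; assumes no quoted commas
            $sth->execute($a, $b);           # RaiseError turns a bad row into a die
        }
        close $fh;
        $dbh->commit;                        # every row was good: commit the batch
    };
    if ($@) {
        warn "import failed, rolling back: $@";
        $dbh->rollback;                      # one bad line undoes the whole file
    }
    $dbh->disconnect;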
 
gheistCommented:
That is not the cause of the integrity problem. In theory, bad data should be rejected at the moment you try to insert it, and your mechanism of inserting in batches is a reasonable balance between locating input errors and filling the database efficiently.

This might be a bug in a previous version, etc. There is no elegant way to fix system tables damaged this way.
 
gheistCommented:
We both gave tips on the issue... A little feedback afterwards would be appreciated.
 
earth man2Commented:
He says:
"I think the idea of inserting into an identical table and then copying would be a solution to that"
If he doesn't tell us where he now stands with the problem, how can we assist further?

The author is using an old version of PostgreSQL, and maybe upgrading to the latest version will solve this issue.
 
mock5cAuthor Commented:
I apologize for abandoning this message; I haven't checked back in a while.  It seems the fix for my problem was to change the cron job that was installed along with PostgreSQL.  It was running a vacuum analyze every 3 or 4 hours, and this is a very large database, so I think that was locking the tables.  Ever since I changed the cron job to run only once a day, in the middle of the night, I have only seen this error a few times.
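
The change amounts to something like this in the packaged crontab; the exact times, path, and the original interval are assumptions based on the description above:

    # Before: vacuum analyze every few hours, locking large tables during the day
    # 0 */4 * * * /usr/bin/vacuumdb --analyze --all

    # After: a single run in the middle of the night
    0 3 * * * /usr/bin/vacuumdb --analyze --all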
 
Computer101Commented:
PAQed with points refunded (250)

Computer101
EE Admin
