SQL0659N: Maximum size of a table object has been exceeded

I got this error yesterday from a batch that was inserting data into a table of a DB2 V8.2 database (running on Linux):

 SQL0659N: Maximum size of a table object has been exceeded

The table is huge (145 million rows, about 75 GB). I found a workaround: I dropped one of its indexes to free some space, but I will need to recreate it.

Can this limit be overridden? If not, what are the alternatives (other than those proposed by IBM in their thread about this issue)?

Thanks for your help.

Fred
fho asked:
 
Kent Olsen (Data Warehouse Architect / DBA) commented:

I'm not intimately familiar with DB2 file/table limits on linux so I'm speculating here.

The message is pretty clear: the table is already as large as DB2 can make it.

When the command aborts, how big is the tablespace? There may be a clue there.

You might try partitioning the table. You might also try normalizing the data a bit to reduce the storage requirements.


Kent
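To follow up on Kent's question about the tablespace size, the current usage can be checked from the DB2 command line processor. This is a sketch; MYDB is a placeholder database name:

```sql
-- Hedged sketch: inspect tablespace usage from the DB2 CLP.
-- MYDB is a placeholder database name.
CONNECT TO MYDB;
-- Shows total, used, and free pages plus the page size for each tablespace.
LIST TABLESPACES SHOW DETAIL;
```

The "Total pages" and "Used pages" figures, together with the page size, show how close the tablespace is to its limit.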
 
fho (Author) commented:
I found another interesting piece of information in the db2diag.log. Just after the LOAD command, there is this error:

"The maximum pool page number has been reached"
 
Kent Olsen (Data Warehouse Architect / DBA) commented:

Two things.  :)

Increase the number of pages allocated for the tablespace where this table resides.  This thing is so huge that you might consider creating another tablespace just for it.

By all means, put the index(es) for this table in another tablespace! This is generally good practice, and in this specific case it could reap huge performance benefits.


Good Luck,
Kent
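Kent's suggestion can be sketched in DDL as follows. The tablespace, container, and table names below are hypothetical, and note that in DB2 V8 the INDEX IN clause can only target DMS tablespaces:

```sql
-- Sketch: dedicated DMS tablespaces for data and indexes
-- (names, paths, and sizes are placeholders).
CREATE TABLESPACE BIGDATA_TS MANAGED BY DATABASE
  USING (FILE '/db2/containers/bigdata_ts.dat' 20000000);
CREATE TABLESPACE BIGIDX_TS MANAGED BY DATABASE
  USING (FILE '/db2/containers/bigidx_ts.dat' 4000000);

-- Place the table's data and its indexes in separate tablespaces.
CREATE TABLE BIG_TABLE (
  ID      INTEGER      NOT NULL,
  PAYLOAD VARCHAR(200)
) IN BIGDATA_TS INDEX IN BIGIDX_TS;
```

Keeping indexes in their own tablespace lets them grow without eating into the pages available to the table's data.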
 
fho (Author) commented:
Hi Kent,
thanks for these pieces of information.

The tablespace is SMS-managed, so there is no need to increase any page count, is there? On the other hand, I had already tried what you suggest: creating another tablespace just for this table. It didn't change anything.

Once this problem is solved, I'll create the indexes in another tablespace. What do you think about partitioning the table? Would it work around this issue, and what would be the impact on performance?
Thanks.
Fred


 
fho (Author) commented:
Hi Kent,

thanks for these pieces of information. A solution to this issue would really be to use partitioning, but unfortunately the version of DB2 I'm using doesn't support it.

I worked around it by creating a tablespace with larger pages. The 64 GB limit applies to the tablespace, not to the table itself.
With 4K pages the limit is 64 GB, with 8K pages it's 128 GB, with 16K pages 256 GB, and with the maximum page size, 32K, you reach the maximum tablespace size: 512 GB.
I created another tablespace with 16K pages and reloaded the data into a new table created in that tablespace.
The error no longer occurs.
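The fix described above could look like the following sketch. The bufferpool, tablespace, and container names are hypothetical; the key point is that a tablespace's page size must match that of an existing bufferpool:

```sql
-- Sketch: 16K bufferpool and matching SMS tablespace
-- (names and path are placeholders).
CREATE BUFFERPOOL BP16K SIZE 10000 PAGESIZE 16K;
CREATE TABLESPACE TS16K PAGESIZE 16K
  MANAGED BY SYSTEM
  USING ('/db2/containers/ts16k')
  BUFFERPOOL BP16K;
-- Recreate the table in TS16K, then LOAD the data again.
```

With 16K pages the table object can grow to 256 GB, comfortably above the 75 GB this table currently needs.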
FYI, the limit comes from DB2, not from Linux.
Thanks again for your help.
Fred