
Solved

Native Error code 414: key size of 765 for a work table exceeds the maximum allowable limit of 600.

Posted on 2004-11-01
Medium Priority
3,128 Views
Last Modified: 2012-05-05
I have a stored procedure querying a Sybase database. The SQL is:
SELECT   tblLDAP.UserAutoId, tblLDAP.FirstName, tblLDAP.LastName
FROM     (tblContacts INNER JOIN tblLDAP ON tblContacts.UserId = tblLDAP.UserAutoId)
         INNER JOIN LDAP_CAD_Buildings ON tblLDAP.Building = LDAP_CAD_Buildings.Building
WHERE    LDAP_CAD_Buildings.Arena_id = @strRegion
GROUP BY tblLDAP.UserAutoId, tblLDAP.FirstName, tblLDAP.LastName
ORDER BY tblLDAP.LastName

Just recently it has started generating the following error:
Sybase ADO Provider error '80004005'
[Native Error code: 414] [DataDirect ADO Sybase Provider] The current query would generate a key size of 765 for a work table. This exceeds the maximum allowable limit of 600.

The query works in a blink in both SQL Server and Access, but not in Sybase. All that has changed recently is that tblLDAP has grown from 32,000 records to 72,000. This should not have affected the output of the recordset, which is dependent on tblContacts. tblContacts holds 534 records and LDAP_CAD_Buildings has 256 records.

The SQL produces a recordset of only 60 records. If I remove the aggregation, the procedure works.

What is causing this problem in Sybase and how do I correct it?

Question by:alitor99
5 Comments
 
LVL 24

Expert Comment

by:Joe Woodhouse
ID: 12468949
OK, the problem here is that, as the error suggests, the query needs to generate a worktable to hold some intermediate results. In your case this seems to be for the GROUP BY clause. Sometimes the optimiser creates temporary indexes on worktables.

Sybase has a limit on how large an index can be in bytes. The error is telling you that the optimiser needs to create an index that is too wide and exceeds this limit.
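
A quick way to see how wide such a key could get is to add up the declared column widths. This is only a sketch, assuming standard ASE system tables and your table name from the query:

select name, length
from   syscolumns
where  id = object_id('tblLDAP')
go

If, say, three varchar(255) columns all end up in the worktable key, the worst case is 765 bytes, well over the 600-byte limit.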

What's interesting is that you say this has previously worked and has only stopped behaving recently. This says to me that there is a different query plan being generated and something in the new way of how the optimiser wants to do things is breaking the key size limit.

It is certainly possible that some changes in row sizes could affect the query plan in terms of join orders etc. I wouldn't think it would affect the GROUP BY, so I admit I'm scratching my head a bit there.

Can we see some more information? It would help us to see the exact schema/structure/definition of the tables involved. There's probably no point asking you to generate a "SHOWPLAN" output (which documents the query plan chosen) since it's dying before it finishes optimising.

If you have the option, try temporarily deleting rows to return to your old rowcounts and see whether the query works then.
 
LVL 1

Author Comment

by:alitor99
ID: 12470976
Joe,
Your comments got me thinking, and I was able to run SHOWPLAN by using
set showplan on
set noexec on
and then adding the text of the stored procedure. This gave me the following results:



QUERY PLAN FOR STATEMENT 1 (at line 1).




    STEP 1
        The type of query is SET OPTION ON.




QUERY PLAN FOR STATEMENT 2 (at line 2).




    STEP 1
        The type of query is SET OPTION ON.




QUERY PLAN FOR STATEMENT 3 (at line 3).




    STEP 1
        The type of query is SELECT (into Worktable1).
        GROUP BY
        Evaluate Grouped COUNT AGGREGATE.


        FROM TABLE
            tblContacts
        Nested iteration.
        Index : idxUserId
        Forward scan.
        Positioning at index start.
        Index contains all needed columns. Base table will not be read.
        Using I/O Size 16 Kbytes for index leaf pages.
        With LRU Buffer Replacement Strategy for index leaf pages.


        FROM TABLE
            tblLDAP
        Nested iteration.
        Index : idxLdap
        Forward scan.
        Positioning by key.
        Keys are:
            UserAutoId  ASC
        Using I/O Size 2 Kbytes for index leaf pages.
        With LRU Buffer Replacement Strategy for index leaf pages.
        Using I/O Size 2 Kbytes for data pages.
        With LRU Buffer Replacement Strategy for data pages.


        FROM TABLE
            LDAP_CAD_Buildings
        Nested iteration.
        Table Scan.
        Forward scan.
        Positioning at start of table.
        Using I/O Size 2 Kbytes for data pages.
        With LRU Buffer Replacement Strategy for data pages.
        TO TABLE
            Worktable1.


    STEP 2
        The type of query is INSERT.
        The update mode is direct.
        Worktable2 created, in allpages locking mode, for ORDER BY.


        FROM TABLE
            Worktable1.
        Nested iteration.
        Table Scan.
        Forward scan.
        Positioning at start of table.
        Using I/O Size 16 Kbytes for data pages.
        With MRU Buffer Replacement Strategy for data pages.
        TO TABLE
            Worktable2.


    STEP 3
        The type of query is SELECT.
        This step involves sorting.


        FROM TABLE
            Worktable2.
        Using GETSORTED
        Table Scan.
        Forward scan.
        Positioning at start of table.
        Using I/O Size 16 Kbytes for data pages.
        With MRU Buffer Replacement Strategy for data pages.


Does this help?
 
LVL 24

Accepted Solution

by:
Joe Woodhouse earned 600 total points
ID: 12471520
On its own, sadly no, that doesn't help much, as it doesn't tell us which worktable is unhappy about the key size. On the other hand, if you were able to replicate your earlier experience (with smaller rowcounts) and get a SHOWPLAN from that, it could be helpful to compare the differences.

Did any indexes change recently?

I notice the showplan says it used a covering non-clustered index. This might be a wide index containing other columns not needed for this query, and its key width might be what exceeds the limit.

Oh, I just thought of another reason why this might have changed: if these columns are varchars, their combined maximum width might exceed the 600-byte limit, and you were never affected by this earlier because none of your rows were filling up the varchars. Can we get some more information? The results of:

sp_help tblLDAP
go
select max(datalength(UserAutoId)),
          max(datalength(FirstName)),
          max(datalength(LastName))
from   tblLDAP
go

(It will table scan, sorry.)

It could be that one or more of the new rows causes the GROUP BY expression to be too large.

Lastly, try removing the GROUP BY altogether and replacing it with a DISTINCT:

SELECT   DISTINCT tblLDAP.UserAutoId, tblLDAP.FirstName, tblLDAP.LastName
FROM     (tblContacts INNER JOIN tblLDAP ON tblContacts.UserId = tblLDAP.UserAutoId)
         INNER JOIN LDAP_CAD_Buildings ON tblLDAP.Building = LDAP_CAD_Buildings.Building
WHERE    LDAP_CAD_Buildings.Arena_id = @strRegion
ORDER BY tblLDAP.LastName

It doesn't seem you're using the GROUP BY for any aggregate functions, so all it will be doing is removing duplicates. Speaking of which, a column name like "UserAutoId" sounds like it would probably be unique - are you sure any duplicates are possible in this query? If not, then neither a GROUP BY nor a DISTINCT are needed here...
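
If you want to confirm whether duplicates actually occur before dropping the DISTINCT, something like this should show them (a sketch against your tables, untested here). Note that grouping on the single, narrow UserAutoId column keeps any worktable key well under the limit:

SELECT   tblLDAP.UserAutoId, COUNT(*)
FROM     (tblContacts INNER JOIN tblLDAP ON tblContacts.UserId = tblLDAP.UserAutoId)
         INNER JOIN LDAP_CAD_Buildings ON tblLDAP.Building = LDAP_CAD_Buildings.Building
WHERE    LDAP_CAD_Buildings.Arena_id = @strRegion
GROUP BY tblLDAP.UserAutoId
HAVING   COUNT(*) > 1
go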

Good luck!
 
LVL 1

Author Comment

by:alitor99
ID: 12473579
Joe,
Replacing GROUP BY with DISTINCT did the trick, albeit rather slowly. There were duplicates, created through tblContacts. This was perfectly correct because it details contacts for various functions, and one individual can take on many roles.

Looking through your comments, I think the error arose when I started updating tblLDAP through SQL Server. Fields were being created as varchar(255), so the three fields aggregate to 765 bytes! Now, through reverse engineering, I finally understand the error message, after the problem is fixed.
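
For the record, if those columns don't really need to hold 255 characters, shrinking them should bring the worktable key back under 600 bytes so the original GROUP BY works again. A sketch, assuming ASE's alter table ... modify syntax and widths that fit the actual data:

alter table tblLDAP modify FirstName varchar(100) null
alter table tblLDAP modify LastName varchar(100) null
go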

Thanks very much for your help
 
LVL 24

Expert Comment

by:Joe Woodhouse
ID: 12477451
You're welcome, glad I could help.

Question has a verified solution.

If you are experiencing a similar issue, please ask a related question

This month, Experts Exchange sat down with resident SQL expert, Jim Horn, for an in-depth look into the makings of a successful career in SQL.
Article by: evilrix
Looking for a way to avoid searching through large data sets for data that doesn't exist? A Bloom Filter might be what you need. This data structure is a probabilistic filter that allows you to avoid unnecessary searches when you know the data defin…
In this video, Percona Solution Engineer Rick Golba discuss how (and why) you implement high availability in a database environment. To discuss how Percona Consulting can help with your design and architecture needs for your database and infrastr…
How to fix incompatible JVM issue while installing Eclipse While installing Eclipse in windows, got one error like above and unable to proceed with the installation. This video describes how to successfully install Eclipse. How to solve incompa…
Suggested Courses

610 members asked questions and received personalized solutions in the past 7 days.

Join the community of 500,000 technology professionals and ask your questions.

Join & Ask a Question