SQL Error - #511

Posted on 2007-11-21
Last Modified: 2013-12-18
I am getting an error while executing a query in Sybase. What I have noticed is that a CHAR column is being converted to VARCHAR, with a size of 4000. When I bring the size down from 4000 to 1000, the error vanishes and the query executes successfully.

Since I am new to Sybase and to databases in general, I wonder what happens when we use convert(varchar(4000)): is there a temporary table involved? Even if that is the case, 4000 bytes is a very small amount, so why does this error come up?

Finally, when I convert a CHAR to VARCHAR, what is the best practice for choosing the size, and how do I know what size to give?

Thanks a Lot
Question by:gops1

Author Comment

ID: 20326916
The error says this:
Number (511) Severity (16) State (2) Server (MDRQA) Attempt to update or insert row failed because resultant row of size 2556

Author Comment

ID: 20327153
To add some more to the question, here is the SQL that I am using:

d.TBL_ID,f.SCHM_NM, f.SCHM_ID,convert(varchar(4000), d.TBL_CMNT_TX)
AND a.DB_ID = 7592

When I reduce 4000 to 1000 it works fine.

Expert Comment

ID: 20327821
This is what I got from the manual (Query Processor Errors):

Error 511

Message text

Version 12.5 and Later
Attempt to update or insert row failed because resultant row of size %d bytes is larger than the maximum size (%d bytes) allowed for this table.

Version 12.0.x and Earlier
Updated or inserted row is bigger than maximum size (%d bytes) allowed for this table.

This error occurs when you try to insert or update a row that is longer than the maximum allowable length. On a server with 2K page size, rows consist of 1962 characters; allow 2 characters of row overhead for APL tables, for a maximum usable row size of 1960. For DOL tables, subtract two characters per varchar column in determining usable row size.

Error 511 is caused by database design errors (for example, a table designed with the potential for rows containing more than the maximum allowable characters).

The following warning is given when you create a table that has the potential for rows exceeding the maximum row size (that is, the maximum length of all columns added up is greater than the allowable number of characters):

Msg 1708, Level 16, State 1: Warning: Row size could exceed row size limit, which is %d bytes.
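To make the arithmetic concrete, here is a hypothetical CREATE TABLE (all names made up) that would draw warning 1708 on a 2K-page server:

```sql
-- Hypothetical table on a 2K-page server (1960 usable bytes per APL
-- row): 4 + 1000 + 1000 = 2004 bytes of potential row data, so this
-- raises warning 1708 at creation time, and a fully populated row
-- would raise error 511 on insert or update.
create table wide_tbl (               -- illustrative name
    id       int           not null,  -- 4 bytes
    cmnt1    varchar(1000) null,      -- up to 1000 bytes
    cmnt2    varchar(1000) null       -- up to 1000 bytes
)
```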

If Error 511 is being caused by a table containing rows with more than the maximum row size, divide the table into two or more tables so that no row length is more than the allowable number of characters.
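A minimal sketch of such a split, assuming a shared key column and using made-up table names:

```sql
-- Sketch: move the wide comment column into its own table so neither
-- row can exceed the page limit; rejoin on the key when needed.
create table tbl_main (
    tbl_id   int          not null primary key,
    schm_nm  varchar(255) null
)

create table tbl_cmnt (
    tbl_id      int           not null primary key,
    tbl_cmnt_tx varchar(1000) null
)

-- Reassemble the original shape with a join:
select m.tbl_id, m.schm_nm, c.tbl_cmnt_tx
from tbl_main m, tbl_cmnt c
where m.tbl_id = c.tbl_id
```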

If the 511 error occurring on your database does not appear to be caused by the above situation, call Sybase Technical Support.

Versions in which this error is raised
All versions


Expert Comment

ID: 20330185
vikram4o is correct that rows cannot exceed the page size.  There are, however, a couple of better ways to skin this cat than splitting it into multiple tables, what with all the heartache that portends.

Depending on which version of ASE you are using, you can create the server with a larger page size.  I believe the option to build the server with 2, 4, 8, or 16K pages was introduced in 12.5 or 12.5.1.  This is a server-wide option and must be set up at the time the server is built; it cannot be changed after the fact.  The good news is that it allows individual VARCHAR fields of almost 16KB in length.  As a matter of course, I routinely install ASE with an 8KB page size.  Given the I/O sizes of disk controllers and the read-ahead functionality of the drives and the database, you never really "read" less than 8KB at a time off a disk anyway, and it also reduces the number of physical I/Os when logging.

The other way around the problem is to store the long character data as a CLOB (Character Large OBject).  LOBs (either Character or Binary) are stored in their own page chain and, of course, are not subject to the page-size limit.  Manipulating LOBs in T-SQL is a bit more limited than CHAR or VARCHAR, but it is the approved, "correct" way to handle large strings in the database.  It also does not merely raise the limit a little; at 2GB, it blows it out of the way.
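In ASE the character LOB type is TEXT; a hedged sketch with made-up names, worth verifying against your version's docs:

```sql
-- Sketch: a TEXT column lives in its own page chain, so it does not
-- count against the in-row size limit (it can hold up to 2GB).
create table tbl_cmnt_lob (
    tbl_id      int  not null,
    tbl_cmnt_tx text null
)

-- TEXT is more limited in T-SQL than VARCHAR (no direct comparisons,
-- no local variables of type text); convert() a prefix when ordinary
-- string handling is needed:
select tbl_id, convert(varchar(255), tbl_cmnt_tx)
from tbl_cmnt_lob
```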

The earliest version of ASE I recommend anyone run is 12.5.1, and preferably 12.5.4, the terminus release of the 12.5.x line.  If you have to upgrade to get there, you might as well go all the way to 15.0.2: you will avoid the impending end of life of 12.5.x, get better performance and some cool new features, and not have to upgrade again for a while.  As far as compatibility goes, I recently ported a very complex application with 600 stored procedures, Java objects and functions, LOBs, and even an interface that makes use of the bulk API, and had no compatibility problems.  The only difference I found with 15 versus 12.5.x is that you used to be able to count on GROUP BY clauses doing an implied ORDER BY of the result set; that is no longer true.


Expert Comment

ID: 20330245
One other thing I noticed: you posted this in the Oracle 10.x area as well, so I gather you have some experience with Oracle too.  FYI, using LONG VARCHAR fields in Oracle has a history of problems.  There is (or was) an intermittent bug in versions 7, 8, and 9 (I don't know about 10) across all platforms: on a heavily updated table with LONG VARCHAR fields, some cross-linking of the contents of the VARCHAR fields would occur.  I am guessing that under the right circumstances you could get page splitting that eventually corrupted one or more pointers.

Because it was rare and you generally had to beat on the database for hours to get it to happen, it never got any attention from Oracle.  I believe that in 10 they got rid of the data type or changed the implementation under the covers.

Best of luck,

Author Comment

ID: 20332390
Thanks a lot Grant to pass on some great Gyan (Knowledge) to me. I also thank Vikram to respond to this quickly.

Please bear with me if I am asking something silly. What I would like to know is: when I write a query, how do I know what the page size is? Or, when I convert a CHAR to VARCHAR, should the size always be less than the page size? Something like this:


Accepted Solution

grant300 earned 500 total points
ID: 20335860
First, I generally avoid using the CHAR data type except for very short fields, e.g. SEX (M/F), FLAG (Y/N), STATE (AK, FL, NY, OH, etc.).  Using CHAR for longer fields leads to a lot of wasted space in the database, particularly if you are worrying about fields as large as a database page.

So, step 1 is to change your tables so that any fields of type CHAR with a length larger than 32 are VARCHAR.  Then you can stop doing conversions that you really don't need to do anyway and you will save a bunch of space in the database.
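Step 1 might look like this (ASE supports ALTER TABLE ... MODIFY; the table and column names are illustrative):

```sql
-- Sketch: change an oversized CHAR column to VARCHAR in place.
-- ASE rewrites the data, so dump the database first.
alter table my_tbl modify tbl_cmnt_tx varchar(1000) null
```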

Step 2 is to look at the real lengths of your various VARCHAR data and figure out what the true, largest realistic row size you are ever going to see is.
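A quick way to do step 2, assuming a made-up table and column, is to ask the server for the longest value actually stored:

```sql
-- Sketch: char_length() returns the stored length, so the maximum
-- tells you the largest realistic size for the VARCHAR.
select max(char_length(tbl_cmnt_tx)) as longest_cmnt
from my_tbl
```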

Step 3 is to rebuild your server with a page size large enough to handle the biggest row you can ever see plus a margin of error.
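This also answers the earlier question about finding the page size: ASE 12.5 and later expose it as a global variable (verify against your version's documentation):

```sql
-- @@maxpagesize reports the server's logical page size in bytes
-- (e.g. 2048, 4096, 8192, or 16384).
select @@maxpagesize
```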

Note: If the largest realistic row size you will ever see approaches 16KB, you have a fundamental design problem and you need to look at one of the other options for handling large strings or many large strings on a record.

One final issue: if you are joining tables, the result set also forms a "row".  The sum of the lengths of the fields in the result set must also be less than the page size.  So if you have CHAR(1000) fields in each of three tables and you join them into a result set that contains all three columns, you fail, because the result-set record length is somewhat above 3000 bytes while your page size is 2K bytes.
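Relating this back to the original query, a sketch (with made-up table and column names) of trimming the converted columns so the joined result row stays under a 2K page:

```sql
-- Three ~1000-byte columns would make a result row of roughly 3000
-- bytes, too wide for a 2K page; trimming each column with convert()
-- keeps the combined row under the limit.
select convert(varchar(500), a.c1),
       convert(varchar(500), b.c1),
       convert(varchar(500), c.c1)
from t1 a, t2 b, t3 c
where a.id = b.id and b.id = c.id
```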

Best of luck,

Author Closing Comment

ID: 31410341
Thanks a lot Grant for giving me a good knowledge on this issue. You deserve more than "A"
