Solved

Maximum row size when converting from MySQL MyISAM to InnoDB

Posted on 2014-01-09
4
2,282 Views
Last Modified: 2014-01-15
I am converting a database running on MySQL 5.6 from MyISAM to InnoDB, and it is failing with this error:

ERROR 1118 (42000) at line 41: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

I have 2 basic questions:



Am I correct in saying that the row size is the combined total of storage required for all columns in the table, based on their data types? I believe text, varchar and blob values are stored in overflow tables, so their columns store pointers. All other 'primitive' data types store the actual data, I think. However, if this is the case, then the row size would be the same for all rows, wouldn't it? What then is the avg_row_size returned by 'show table status'?

How can I find out the size of a row in a MyISAM or InnoDB table? If I know how to do this, I can experiment with deleting columns until the row size is small enough. Do I just have to add up the column sizes manually?
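For what it's worth, the engine's own row statistics can be read like this (the table and schema names are placeholders); note that Avg_row_length is an estimate, roughly Data_length divided by the row count, not an exact per-row measurement:

```sql
-- Per-table statistics, including Avg_row_length and Data_length:
SHOW TABLE STATUS LIKE 'mytable';

-- The same figures via information_schema:
SELECT TABLE_NAME, ENGINE, ROW_FORMAT, AVG_ROW_LENGTH, DATA_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
```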

Thanks a lot
Question by:andieje
4 Comments
 
LVL 83

Expert Comment

by:Dave Baldwin
ID: 39770170
Here are the MySQL Data Type Storage Requirements: http://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html

The maximum row size is 65,535 bytes.  The size in the declaration also depends on the character set: with a two-byte character set, VARCHAR(255) is 255 characters times 2 bytes, which = 510 bytes.

phpMyAdmin shows the "Row length" and "Row size" but I don't know where it's getting the info.
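As a sketch (table and schema names are placeholders), the declared maximum byte width of each column can be pulled from information_schema, which already folds the character set into the figure:

```sql
-- CHARACTER_OCTET_LENGTH accounts for multi-byte character sets,
-- e.g. VARCHAR(255) in utf8 (3 bytes/char) shows 765.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_OCTET_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable'
ORDER BY CHARACTER_OCTET_LENGTH DESC;
```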
 
LVL 34

Accepted Solution

by:
gr8gonzo earned 500 total points
ID: 39770176
varchar columns are not stored in overflow pages, but they -are- dynamically sized, so a varchar(255) is not going to take up 255 bytes in a row if that row only has the letter "A" in that field, for example. Varchar values can hold up to that amount. You are correct that the majority of the content of text and blob values goes into separate overflow pages.

You can define a table structure that can hold FAR more than the ~8k row size (a hundred varchar(200) fields could theoretically hold up to around 20,000 bytes), but any given record maxes out at 8k of actual data. If you try to store more than 8k of data in a record, you'll hit errors.

When a database is in a typical normalized state, you usually don't have that much data in a single record (8k is a pretty massive amount of non-text-block data for a single record, since you usually have either small values or REALLY LARGE values, but not usually many medium-sized ones).

Now, if you're getting that error during conversion, it's likely because one or more records can't fit into the new structure. It'll be up to you to figure out how you want to reorganize the table.
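A sketch of the conversion the error message itself suggests, assuming a table named mytable; on MySQL 5.6, the DYNAMIC and COMPRESSED row formats require the Barracuda file format and per-table tablespaces:

```sql
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

-- With DYNAMIC, long variable-length columns are stored fully off-page
-- (only a 20-byte pointer stays inline), instead of the 768-byte inline
-- prefix used by the COMPACT row format:
ALTER TABLE mytable ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
```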

I'm guessing you have a ton of fields, so I'd also suggest examining your various data types to make sure that you're not frequently using a data type that takes up more storage than it needs. For example, I frequently see people using INT (4 bytes per column) to store boolean flags or very small values that could easily fit into a TINYINT (1 byte per column).

A more "extreme" change is to store UNIX timestamps in INT fields instead of using the DATETIME data type (8 bytes before MySQL 5.6.4, 5 bytes plus fractional seconds since), but this usually depends on the scenario.
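Hypothetical examples of that kind of shrinking (the column names are made up):

```sql
-- Boolean flag stored in an INT: 4 bytes -> 1 byte
ALTER TABLE mytable MODIFY is_active TINYINT NOT NULL;

-- UNIX timestamp in an INT instead of a DATETIME
ALTER TABLE mytable MODIFY created_at INT UNSIGNED NOT NULL;
```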

Check out this page for more info:
http://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html
 
LVL 34

Expert Comment

by:gr8gonzo
ID: 39770183
To clarify something Dave said: that 65,535-byte max row size refers to the internal storage structure and the maximum combined length of all column definitions (e.g. try to create a table with a thousand varchar(100) columns and it'll fail).

The defined data limit per record is still around 8000 bytes, about half of the default 16k page size for InnoDB.

http://dev.mysql.com/doc/refman/5.6/en/innodb-restrictions.html
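A small sketch of that limit in action: the combined *declared* length is what matters, so even an empty table can fail at CREATE time. With utf8 (up to 3 bytes per character), two VARCHAR(12000) columns already declare 72,000 bytes:

```sql
-- Fails with ERROR 1118: Row size too large (maximum is 65,535 bytes)
CREATE TABLE too_wide (
  a VARCHAR(12000),
  b VARCHAR(12000)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```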
 

Author Comment

by:andieje
ID: 39771588
Hi gr8gonzo

The majority of the columns are text, so if that text is stored in overflow pages and the data size exceeds 8k, will the record size still be too big? Does the 8k limit include all the data in the overflow pages, or just the data in the 'record'? Does a text column just store a pointer of a few bytes in the actual record, with the rest of the data in overflow pages?
