Solved

maximum row size when converting from mysql MyISAM to innodb

Posted on 2014-01-09
2,231 Views
Last Modified: 2014-01-15
I am converting a database running on mysql 5.6 from MyISAM to innodb and it is failing due to this error

ERROR 1118 (42000) at line 41: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

I have two basic questions:

1. Am I correct in saying that the row size is the combined total of the storage required for all columns in the table, based on their data types? I believe text, varchar and blob values are stored in overflow pages, so their columns store pointers, while all the other 'primitive' data types store the actual data. But if that's the case, wouldn't the row size be the same for every row? What, then, is the avg_row_length returned by SHOW TABLE STATUS?

2. How can I find out the size of a row in a MyISAM or InnoDB table? If I knew how to do this, I could experiment with dropping columns until the row size is small enough. Do I just have to add up the column sizes manually?

Thanks a lot
Question by:andieje
4 Comments
 
LVL 82

Expert Comment

by:Dave Baldwin
Here are the MySQL Data Type Storage Requirements: http://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html

The maximum row size is 65,535 bytes. The size in the declaration also depends on the character set: with a two-byte character set, VARCHAR(255) is 255 characters times 2 bytes = 510 bytes.

phpMyAdmin shows the "Row length" and "Row size" but I don't know where it's getting the info.
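For what it's worth, phpMyAdmin most likely reads those figures from SHOW TABLE STATUS (or the information_schema.TABLES view behind it). Note that for InnoDB, Avg_row_length is only an estimate (data length divided by an approximate row count), not an exact per-row measurement. A sketch, with made-up database and table names:

```sql
-- Per-table size figures; Avg_row_length = Data_length / Rows
-- (exact for MyISAM, an estimate for InnoDB).
SHOW TABLE STATUS LIKE 'mytable';

-- The same figures via information_schema:
SELECT table_name, avg_row_length, data_length, table_rows
FROM information_schema.TABLES
WHERE table_schema = 'mydb' AND table_name = 'mytable';
```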
 
LVL 34

Accepted Solution

by:
gr8gonzo earned 500 total points
varchar columns are not stored in overflow pages, but they -are- dynamically sized, so a varchar(255) is not going to take up 255 bytes in a row if that row only has the letter "A" in that field, for example. Varchar values can hold UP TO that amount. You are correct that the majority of the content of text and blob values goes into separate overflow pages.

You can define a table structure that can hold FAR more than the ~8k row size (a hundred varchar(200) fields could theoretically hold up to around 20,000 bytes), but any given record maxes out at 8k of actual data. If you try to store more than 8k of data in a record, you'll hit errors.
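To sketch the distinction (this is a hypothetical table, and the exact behavior depends on innodb_strict_mode -- off by default in 5.6, where the error may only surface when a row is actually written):

```sql
-- The combined declared length (12 x 700 = 8,400 bytes with latin1) is well
-- under the 65,535-byte definition limit, but a fully populated row would
-- exceed InnoDB's ~8,126-byte per-record limit. None of these columns is
-- long enough to be pushed off-page, so such a row simply doesn't fit.
CREATE TABLE wide_demo (
  c01 VARCHAR(700), c02 VARCHAR(700), c03 VARCHAR(700), c04 VARCHAR(700),
  c05 VARCHAR(700), c06 VARCHAR(700), c07 VARCHAR(700), c08 VARCHAR(700),
  c09 VARCHAR(700), c10 VARCHAR(700), c11 VARCHAR(700), c12 VARCHAR(700)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
```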

When a database is in a typical normalized state, you usually don't have that much data in a single record (8k is a pretty massive amount of non-text-block data for a single record, since you usually have either small values or REALLY LARGE values, but not usually many medium-sized ones).

Now, if you're getting that error during conversion, it's likely because one or more records can't fit into the new structure. It'll be up to you to figure out how you want to reorganize the table.

I'm guessing you have a ton of fields, so I'd also suggest examining your various data types to make sure that you're not frequently using a data type that takes up more storage than it needs. For example, I frequently see people using INT (4 bytes per column) to store boolean flags or very small values that could easily fit into a TINYINT (1 byte per column).

A more "extreme" change is to store UNIX timestamps into INT fields instead of using the DATETIME data type (8 bytes), but this is usually dependent on the scenario.
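As a sketch of those kinds of changes (the table and column names here are made up):

```sql
-- Shrink over-sized types: a flag doesn't need 4 bytes, and a UNIX
-- timestamp fits in a 4-byte INT instead of a DATETIME.
ALTER TABLE mytable
  MODIFY is_active  TINYINT UNSIGNED NOT NULL,  -- was INT (4 bytes -> 1)
  MODIFY created_at INT UNSIGNED NOT NULL;      -- was DATETIME
```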

Check out this page for more info:
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
 
LVL 34

Expert Comment

by:gr8gonzo
To clarify something Dave said, that 64k max row size for InnoDB refers to internal storage structure and the maximum combined length for all field definitions (e.g. try to create a table with a thousand varchar(100) fields and it'll fail).

The defined data limit per record is still around 8000 bytes, about half of the default 16k page size for InnoDB.

http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html
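Since the error message itself suggests ROW_FORMAT=DYNAMIC, it's worth noting how that helps: with DYNAMIC (or COMPRESSED), long TEXT/BLOB/VARCHAR values are stored fully off-page with only a 20-byte pointer in the record, instead of the 768-byte inline prefix that the COMPACT format keeps for each such column. A sketch for 5.6, where DYNAMIC requires the Barracuda file format (table name is hypothetical):

```sql
-- Barracuda is required for DYNAMIC/COMPRESSED on MySQL 5.6.
SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

-- Rebuild the table; tables with many TEXT columns that fail under
-- COMPACT (768 bytes inline per TEXT column) often convert fine this way.
ALTER TABLE mytable ENGINE=InnoDB ROW_FORMAT=DYNAMIC;
```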
 

Author Comment

by:andieje
Hi gr8gonzo

The majority of the columns are text, so if this text is stored in overflow pages and the data size exceeds 8k, will the record size be too big? Does this 8k limit include all the data in the overflow pages, or just the data in the 'record'? Does a text column just store 2 bytes in the actual record and the rest of the data in overflow pages?
