
  • Status: Solved
Maximum row size when converting from MySQL MyISAM to InnoDB

I am converting a database running on MySQL 5.6 from MyISAM to InnoDB, and it is failing with this error:

ERROR 1118 (42000) at line 41: Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

I have two basic questions:

1. Am I correct in saying that the row size is the combined total of storage required for all columns in the table, based on their data types? I believe TEXT, VARCHAR and BLOB values are stored in overflow tables, so their columns store pointers. All other 'primitive' data types store the actual data, I think. However, if this is the case, then the row size would be the same for all rows, wouldn't it? What then is the avg_row_size returned from 'show table status'?

2. How can I find out the row size of a row in a MyISAM or InnoDB table? If I know how to do this, I can experiment with deleting columns until the row size is small enough. Do I just have to add up the column sizes manually?

Thanks a lot
Asked by andieje
1 Solution
 
Dave Baldwin (Fixer of Problems) commented:
Here are the MySQL Data Type Storage Requirements: http://dev.mysql.com/doc/refman/5.6/en/storage-requirements.html

The maximum row size is 65,535 bytes.  The size in the declaration also depends on the character set: if it is a two-byte character set, then a VARCHAR(255) is 255 characters times 2 bytes, which equals 510 bytes.
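That arithmetic can be sketched like this (the 1- or 2-byte length prefix comes from the MySQL storage-requirements page linked above; the function name is ours):

```python
def varchar_max_bytes(length, bytes_per_char):
    """Maximum on-disk bytes for a VARCHAR(length) column."""
    data = length * bytes_per_char
    # VARCHAR adds a 1-byte length prefix if values need <= 255 bytes,
    # otherwise a 2-byte prefix (per the MySQL storage-requirements docs)
    prefix = 1 if data <= 255 else 2
    return data + prefix

print(varchar_max_bytes(255, 1))  # 256: single-byte charset, 1-byte prefix
print(varchar_max_bytes(255, 2))  # 512: 510 data bytes + 2-byte prefix
```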

phpMyAdmin shows the "Row length" and "Row size" but I don't know where it's getting the info.
 
gr8gonzo (Consultant) commented:
VARCHAR columns are not stored in overflow tables, but they -are- dynamically sized, so a varchar(255) is not going to take up 255 bytes in a row if that row only has the letter "A" in that field, for example. VARCHAR values can accept up to that amount. You are correct that the majority of the content of TEXT and BLOB values goes into separate overflow pages.

You can define a table structure that can hold FAR more than the ~8k row size (a hundred varchar(200) fields could theoretically hold up to around 20,000 bytes), but any given record maxes out at 8k of actual data. If you try to store more than 8k of data in a record, you'll hit errors.

When a database is in a typical normalized state, you usually don't have that much data in a single record (8k is a pretty massive amount of non-text-block data for a single record, since you usually have either small values or REALLY LARGE values, but not usually many medium-sized ones).

Now, if you're getting that error during conversion, it's likely because one or more records can't fit into the new structure. It'll be up to you to figure out how you want to reorganize the table.

I'm guessing you have a ton of fields, so I'd also suggest examining your various data types to make sure that you're not frequently using a data type that takes up more storage than it needs. For example, I frequently see people using INT (4 bytes per column) to store boolean flags or very small values that could easily fit into a TINYINT (1 byte per column).

A more "extreme" change is to store UNIX timestamps into INT fields instead of using the DATETIME data type (8 bytes), but this is usually dependent on the scenario.
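To illustrate the kind of per-row savings involved (the column counts here are hypothetical; the byte sizes are the documented fixed-length storage requirements mentioned above):

```python
# Fixed-length storage sizes in bytes, per the MySQL docs
INT, TINYINT, DATETIME = 4, 1, 8  # DATETIME was 8 bytes before MySQL 5.6.4

# Hypothetical wide table: 20 flag columns and 5 date columns
before = 20 * INT + 5 * DATETIME      # flags as INT, dates as DATETIME
after = 20 * TINYINT + 5 * INT        # flags as TINYINT, dates as UNIX timestamps in INT
print(before, after, before - after)  # 120 40 80 -> 80 bytes saved per row
```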

Check out this page for more info:
http://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html
 
gr8gonzo (Consultant) commented:
To clarify something Dave said, that 64k max row size refers to the internal storage structure and the maximum combined length of all field definitions (e.g. try to create a table with a thousand varchar(100) fields and it will fail).

The defined data limit per record is still around 8000 bytes, about half of the default 16k page size for InnoDB.
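The 8126 figure in the original error message lines up with half of that default page size (a sketch; the exact overhead InnoDB deducts per page is internal):

```python
PAGE_SIZE = 16 * 1024       # default InnoDB page size (16 KB)
half_page = PAGE_SIZE // 2  # InnoDB wants at least two rows per index page
print(half_page)            # 8192; the 8126 limit is this minus page/row overhead
```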

http://dev.mysql.com/doc/refman/5.0/en/innodb-restrictions.html
 
andieje (Author) commented:
Hi gr8gonzo,

The majority of the columns are TEXT, so if this text is stored in overflow tables and the data size exceeds 8k, will the record size be too big? Does this 8k limit include all the data in the overflow tables, or is it just the data in the 'record'? Does a TEXT column just store 2 bytes in the actual record and the rest of the data in overflow tables?

Featured Post

Fill in the form and get your FREE NFR key NOW!

Veeam is happy to provide a FREE NFR server license to certified engineers, trainers, and bloggers.  It allows for the non‑production use of Veeam Agent for Microsoft Windows. This license is valid for five workstations and two servers.

  • 2
Tackle projects and never again get stuck behind a technical roadblock.
Join Now