supercell29 asked:

I would like to restore a database in MySQL from a backup file made with the same MySQL version.

I would like to restore a database I recently created in MySQL. What is the command to successfully restore a backed-up database (rt3.sql.gz)?

Thank you in advance....

MMDeveloper:

gunzip rt3.sql.gz
mysql -b DatabaseName < rt3.sql

If you need to provide login credentials:

mysql -u root -p -b DatabaseName < rt3.sql
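If you are short on disk space, you can also skip the intermediate file and stream the decompressed dump straight into the client (a minimal sketch; gunzip -c decompresses to stdout and leaves the original .gz in place):

gunzip -c rt3.sql.gz | mysql -u root -p DatabaseName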
supercell29 (Asker):

I get the following error:

ERROR 1064 (42000) at line 4124: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when du' at line 1

I am attempting to migrate from MySQL 5.0.27 to 5.0.45. This is a big database!
Follow up:

Created a test_db in 5.0.45. Dumped it.

mysql -u root -p -b test_db < test_db.sql and it worked fine (same MySQL version)

When dumping 5.0.27 db (production db) and performing same command: mysql -u root -p -b csrt3 < csrt3.sql on 5.0.45 I get the error mentioned above:

ERROR 1064 (42000) at line 4124: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when du' at line 1
My /etc/my.cnf file:

# Example MySQL config file for large systems.
#
# This is for a large system with memory = 512M where the system runs mainly
# MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/db/mysql) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
password        =  (my pwd)
port            = 3306
socket          = /tmp/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-locking
key_buffer = 256M
max_allowed_packet = 32M
table_cache = 256
sort_buffer_size = 25M
read_buffer_size = 25M
read_rnd_buffer_size = 25M
myisam_sort_buffer_size = 128M
thread_cache_size = 25M
query_cache_size= 25M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 4
set-variable = max_connections=2500
datadir = /usr/db/mysql

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
#skip-networking

# Replication Master Server (default)
# binary logging is required for replication
#log-bin=mysql-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id       = 1                                                                                                    


# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password>;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slaver server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id       = 2
#
# The replication master for this slave - required
#master-host     =   <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user     =   <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password =   <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port     =   <port>
#
# binary logging - not required for slaves, but recommended
#log-bin=mysql-bin
                                                                                                 

# Point the following paths to different dedicated disks
#tmpdir         = /tmp/
#log-update     = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 64M
#bdb_max_lock = 100000

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 256M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 64M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 32M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 128M
sort_buffer_size = 128M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout                                              
Actual file I am trying to import: -rw-r--r--  1 root      wheel   3659372562 Jul 16 09:20 csrt3.sql
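One detail worth noticing in that 1064 error: the text MySQL is choking on is mysqldump's own "Error 2020" message, which suggests the dump itself aborted mid-table on the old server and its error output ended up inside the .sql file. A quick way to check (a sketch; it just searches the dump for stray mysqldump messages):

# grep -n "mysqldump:" csrt3.sql | head

If that matches anything, the dump file is truncated and no setting on the importing server will fix it - the dump has to be redone on the source with a larger max_allowed_packet there.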
In case you have access to both servers, and the versions of both servers are almost the same, it might just work to move over the database files and set permissions again.

In FreeBSD your databases are most likely stored in a folder in /var/db/mysql

Just tar this folder:
# cd /var/db/mysql && tar -cvzf backup.tar.gz foldername

Then move it to the new server, and place it under /var/db/mysql, then untar

# cd /var/db/mysql && tar -xvzf backup.tar.gz foldername

Make sure you delete the failed import attempt database first before doing this. Then set permissions (assuming your user and group are mysql):

# chown -R mysql:mysql /var/db/mysql/foldername

Again, this might not work, but it is worth trying.

You could also increase the value of max_allowed_packet in /etc/my.cnf, but I think it has to go under [mysqld] instead of [mysqldump], or even both; I am not sure about these MySQL settings.
In case you wonder what "foldername" stands for: It should represent the name of the database, which is a folder in /var/db/mysql
You could also consider MySQL management software like Navicat, which can transfer complete databases from one server to another. Although the biggest I have ever transferred is a few gigabytes, it did the job very well. I do think that you also need to increase max_allowed_packet to make this work.
I have my db in a different location and have my my.cnf pointed to it and it works just fine. Thank you for the info.

I am checking out the phpMyAdmin utility now to see what it is capable of. It looks as though either A) somewhere, some buffer needs to be increased (not sure where; I have altered the mysqldump number in my.cnf:

[mysqldump]
quick
max_allowed_packet = 32M

and still nothing), or B) this may have something to do with how I am exporting from MySQL 5.0.27, and/or with how I am importing into 5.0.45. I am assuming there is something slightly different between the two. I will also look into Navicat. Thank you!!
I just looked up the max_allowed_packet on mysql.com:

http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html

It says you should add the option in the [mysqld] section instead of [mysqldump]. This should be done on the target server. If you get this error when using mysqldump, add the option also on the source server, but remember to put it in the [mysqld] section of /etc/my.cnf

Again, if you move over the database folder and files, it should work perfectly without you having to change any settings.

If you want to transfer it using mysqldump and then mysql import, or by using phpmyadmin or navicat, you should increase the max_allowed_packet because I think they all use this limit.
Also something to note: just adding the setting to my.cnf is not enough. You also need to restart the MySQL server to make the new settings active.
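To see what the running server is actually using, and to raise the value without waiting for an edit-plus-restart cycle, you can also set it at runtime (a sketch; SET GLOBAL only affects new connections and is lost again at the next restart, so keep the my.cnf entry too):

# mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"
# mysql -u root -p -e "SET GLOBAL max_allowed_packet = 64 * 1024 * 1024"

The second command needs the SUPER privilege, which root normally has.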
I did this. Still get the error. Thank you :~)
Two options:
1) Your sql file contains a single SQL query to insert all data
2) You need to increase the value of max_allowed_packet until the error message disappears

I personally think you have the first situation.

I quote mysql.com:

"A communication packet is a single SQL statement sent to the MySQL server, a single row that is sent to the client, or a binary log event sent from a master replication server to a slave. The largest possible packet that can be transmitted to or from a MySQL 5.0 server or client is 1GB."

Try the mysqldump this way:

# mysqldump --opt --skip-extended-insert --skip-quick db_name > dump.sql
"Greater than" needs to be "less than" mysqldump --opt --skip-extended-insert --skip-quick db_name < dump.sql not mysqldump --opt --skip-extended-insert --skip-quick db_name > dump.sql. I will try this. Thanks again!
You need to run this command on the SOURCE server to create a dump that uses a single query per row, instead of an extended-insert query.

Therefore this command needs > to dump to that file.

# mysqldump --opt --skip-extended-insert --skip-quick db_name > dump.sql

Transfer that dump.sql file to the target server, and run this:

# mysql db_name < dump.sql

If you have a user/pass you need to add "-u username -p" to the commands like this:

# mysqldump -u username -p --opt --skip-extended-insert --skip-quick db_name > dump.sql (source server)
# mysql -u username -p db_name < dump.sql (target server)

Extended insert is used by default on newer MySQL versions when you use mysqldump. I quote mysql.com on the definition of --extended-insert: "Use multiple-row INSERT syntax that include several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded." Although this is perfect in most cases, sometimes it can lead to enormous queries.
This certainly looks like we are heading in the right direction:

engineering2# mysqldump -u root -p --opt --skip-extended-insert --skip-quick csrt3 < csrt3.sql

Enter password:
-- MySQL dump 10.11
--
-- Host: localhost    Database: csrt3
-- ------------------------------------------------------
-- Server version       5.0.45

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;

/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2008-07-18 16:06:49                                          

PROBLEM:

The database (which I created separately) shows nothing. No tables... nothing.
Again, you should use > instead of < when using mysqldump on the SOURCE server

mysqldump -u root -p --opt --skip-extended-insert --skip-quick csrt3 > csrt3.sql

This way the output will be redirected to the file csrt3.sql

You are probably running this command on the TARGET server, instead of the SOURCE server. Watch SOURCE and TARGET, because you want to transfer from SOURCE to TARGET and are mixing this up.
This is a standalone.
I am dumping from the old 5.0.27 to our NAS. From there I am taking the csrt3.sql.tgz (dumped file) and copying it to my standalone. From there I am gunzipping the file and attempting to import into the new 5.0.45 MySQL.
Ah, got it. Your initial statement was for DUMPING the database. Yes, I was seeing this as part of the import process. My bad.
Let me summarize.

SOURCE = old 5.0.27
TARGET = new 5.0.45

This procedure assumes you have mounted the NAS at /nas on both the SOURCE and TARGET servers

On SOURCE:

# mysqldump -u root -p --opt --skip-extended-insert --skip-quick csrt3 > csrt3.sql
# tar -cvzf csrt3.sql.tar.gz csrt3.sql
# cp csrt3.sql.tar.gz /nas/csrt3.sql.tar.gz

On TARGET:

Remove any existing tables in database csrt3 before proceeding

# cp /nas/csrt3.sql.tar.gz csrt3.sql.tar.gz
# tar -xvzf csrt3.sql.tar.gz
# mysql -u root -p csrt3 < csrt3.sql
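A variation on the same procedure that avoids storing the uncompressed 3.6 GB file on either side (a sketch, assuming the NAS is mounted at /nas as above):

On SOURCE:

# mysqldump -u root -p --opt --skip-extended-insert --skip-quick csrt3 | gzip > /nas/csrt3.sql.gz

On TARGET:

# gunzip -c /nas/csrt3.sql.gz | mysql -u root -p csrt3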
I will try this command to dump the SOURCE database this weekend (it will kill all connections if I do it during production time). I will let you know......
Sure, good luck! I am pretty much sure this will solve the problem.
SOURCE = old 5.0.27
TARGET = new 5.0.45

This procedure assumes you have mounted the NAS at /nas on both the SOURCE and TARGET servers: YES

On SOURCE: (1 GB of RAM)

# mysqldump -u root -p --opt --skip-extended-insert --skip-quick csrt3 > /filer/eng2_db_backup/engineering_original/csrt3.sql
Enter password:
mysqldump: Out of memory (Needed 1298076 bytes)
mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server      
SOURCE: FROM WEBMIN USING "BACKUP DATABASES":

Webmin "BACKUP DATABASES" creates the following:

mysqldump -u root --quote-names csrt3 2>&1 | gzip -c >/filer/eng2_db_backup/engineering_original/MYSQLDUMP_TEST/csrt3.sql.gz 2>&1

RESULT:

Successfully backed up 1223 bytes from database information_schema to file /filer/eng2_db_backup/engineering_original/MYSQLDUMP_TEST/information_schema.sql.

Successfully backed up 3659376511 bytes from database csrt3 to file /filer/eng2_db_backup/engineering_original/MYSQLDUMP_TEST/csrt3.sql.

Successfully backed up 292233 bytes from database mysql to file /filer/eng2_db_backup/engineering_original/MYSQLDUMP_TEST/mysql.sql.

I will try and import this to TARGET on Monday.
SOURCE:

Here are the actual Webmin settings I used to create the aforementioned RESULT.
Attachment: FOR-EX-ECH---MYSQLDUMP---WEBMIN-.jpg
SOURCE (WEBMIN):

This (Webmin) worked for all three compression settings: .sql (well, no compression too), .tgz, and .bzip. The aforementioned graphic shows the .gz, but the results are for the non-compressed backup (.sql).
Now to see if the import will work just as well....(fingers crossed)
I didn't think this would work. Since MySQL 5.0 the mysqldump function has changed a bit. It adds the --opt option by default, and therefore a lot of options are enabled by default. --opt is a group option that includes several other options, like extended-insert.

The goal of extended-insert is to decrease the size of the dumped SQL file, and make it faster to import. Therefore it will write the insert queries in a specific manner:

INSERT INTO table_name VALUES (), (), ();

This means that it will create a single query for all entries in a table. Therefore the query gets so big in your case that it won't import without a hassle.

When using --skip-extended-insert on the SOURCE server, you got an error because there was not enough memory available on that server to create the backup sql file the traditional way. This would produce a line for each row in the table:

INSERT INTO table_name VALUES ();
INSERT INTO table_name VALUES ();

You can see that this approach will not create massive sql queries, but the size of the backup sql file will be a lot bigger. Therefore the SOURCE server needs much more memory to do the dump, which you don't have.

I quote Mysql.com (http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html):

"Some mysqldump options are shorthand for groups of other options. --opt and --compact fall into this category. For example, use of --opt is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. Note that all of the options that --opt stands for also are on by default because --opt is on by default."

"--extended-insert, -e
Use multiple-row INSERT syntax that include several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded"

I assume Webmin is also using mysqldump to create its sql file. Therefore chances are good you were able to create the backup sql. But I am afraid that it will contain extended-insert queries, which you won't get imported easily on the TARGET server.

---

Did you try the first solution I offered you? It has ID 22034015.

In case the backup sql file created by webmin does not get imported, you will have 2 solutions:

- Add extra memory to the SOURCE server, and make sure the TARGET server has at least as much memory as the SOURCE server. This way you would be able to create a traditional backup sql file with the --skip-extended-insert option.

- Use the solution offered in the post with ID 22034015. This will allow you to just copy the MySQL database folder and files to the TARGET server, without creating a dumped sql file. It is the easiest solution for handling big databases, especially when the versions of both servers are so close together. The database will be available in the Webmin MySQL module right after you have unpacked it on the TARGET server and set permissions.
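One more option that might be worth a try before adding memory (an untested sketch): mysqldump's --net_buffer_length option caps the size of each multi-row INSERT it generates, so you can keep extended inserts (small file, low memory use) while still staying under the TARGET's max_allowed_packet. The 1M value here is an arbitrary example:

# mysqldump -u root -p --opt --net_buffer_length=1M csrt3 > csrt3.sql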
Interesting discovery: the SOURCE database is symbolically linked from /var/db/mysql to /vol0/mysqldb (vol0 is a RAID array with a ton of hard drive space). When I restore from my Webmin backups (backing up .sql's from /vol0/mysqldb), does this have the potential to mess things up? I have been restoring .sql files on the TARGET server that were backed up from /vol0/mysqldb, NOT /var/db/mysql.

For ID 22034015: I literally just need to copy the csrt3 folder from the SOURCE server to the TARGET server (after untarring and chowning) and it may work? Will the symbolic link mess things up?
Yeah, skipped this one I think. I will try this tomorrow as well as the Webmin.

RAM: Yeah, I had a feeling. I only have 1 GB in each rack server. I can easily upgrade to 2 or 3 GB.

Thank you again! I will let you know the results! (Hopefully positive. :~)
TARGET:

Physical RAM test: Added 2 GB of RAM to TARGET rack server.

TARGET# mysql -u root -p csrt3 < csrt3.sql
Enter password:
ERROR 1114 (HY000) at line 2388: The table 'Attachments' is full

This is a different error. (As you mentioned earlier, I may need more RAM in the SOURCE server.)

I will now try ID 22034015.

THIS IS WHAT I DID ON TARGET:

# Copied var-db-mysql.tgz from the /filer (NAS) folder where the former admin backs up the SOURCE /vol0 directory (which is symbolically linked from /var/db/mysql, thus the var-db-mysql.tgz name) to /usr/db/mysql on TARGET.

# In /usr/db/mysql on TARGET: tar -xvzf var-db-mysql.tgz (the former admin has an automated utility on SOURCE which backs up /var/db/mysql -> /vol0/mysqldb to the NAS (/filer) server on a nightly basis; I cannot find what the hell he uses).

So "move it to the new server, place it under /var/db/mysql, then untar" was done as described above. Within this vol0 folder (the default folder name inside var-db-mysql.tgz) are the following:

engineering2# ls -al
total 7916344
drwx------  4 mysql  mysql         512 Jul 21 15:01 .
drwxr-xr-x  3 root   mysql         512 Jul 21 15:01 ..
drwx------  2 mysql  mysql        1024 Jul 21 15:01 csrt3 (ONLY RISES TO 1024 AND THAT'S IT)
-rw-r-----  1 mysql  mysql    10876602 Jul 13 02:41 host.website.com.err
-rw-r-----  1 mysql  mysql     5242880 Jul 14 00:07 ib_logfile0
-rw-r-----  1 mysql  mysql     5242880 Jul 14 00:07 ib_logfile1
-rw-r-----  1 root   mysql 29269950464 Jul 21 15:14 ibdata1 (THIS ONE TOOK A VERY LONG TIME TO DECOMPRESS)
drwx------  2 mysql  mysql        1536 Jul 21 15:01 mysql              

As the file is being decompressed I notice only the ibdata1 number is rising. Why isn't the csrt3 folder growing as well? The csrt3.sql from Webmin's backup shows csrt3 as: -rw-r--r--  1 root  wheel  3659376511 Jul 19 12:29 csrt3.sql (I ran chown -R mysql:mysql on csrt3.sql prior to # mysql -u root -p csrt3 < csrt3.sql)

Make sure you delete the import attempt database first before doing this (I DID). Then set permissions (assuming your user and group are mysql) (YES, IT IS: DONE).

FROM THE IMPORTED/DECOMPRESSED var-db-mysql.tgz (/usr/db/mysql/vol0/mysqldb on TARGET):

#cp -r csrt3 /usr/db/mysql
#chown -R mysql:mysql csrt3

# IN /usr/db/mysql/csrt3 on TARGET:


ACL.frm                         CustomFields.frm                ObjectCustomFields.frm          Scrips.frm                      db.opt
Attachments.frm                 GroupMembers.frm                Principals.frm                  Templates.frm                   sessions.MYD
Attributes.frm                  Groups.frm                      Queues.frm                      Tickets.frm                     sessions.MYI
CachedGroupMembers.frm          Links.frm                       ScripActions.frm                Transactions.frm                sessions.frm
CustomFieldValues.frm           ObjectCustomFieldValues.frm     ScripConditions.frm             Users.frm

Restart MySQL: I see the csrt3 folder (db). When I double-click on it I get the error seen in the attachment.

Attachment: FROM-var-db-mysql.tar.jpeg
UPGRADED RAM IN BOTH SOURCE AND TARGET:

SOURCE: hw.physmem: 2138505216

TARGET: hw.physmem: 3208056832


Will retry ID 22034015 tonight.
Is there a way to import one table at a time?
I was told to try this:

Shut down Apache,

SOURCE: mysqldump --opt --skip-lock-tables --add-drop-table --single-transaction --default-character-set=binary csrt3 > csrt3.sql

I will tonight.
This is what I meant:

SOURCE:

Shut down Apache,

mysqldump --skip-lock-tables --single-transaction --default-character-set=binary csrt3 > csrt3.sql


(SOURCE RAM UPGRADED: hw.physmem: 2138505216)
I have been very sick the last couple of days, unable to check EE. But I am back now and catching up.

I have never tried to move the files and folders of InnoDB databases, only MyISAM. So this might not work as expected.

I have found a procedure on mysql.com to move InnoDB databases and I hope this will help you:

http://dev.mysql.com/doc/refman/5.0/en/moving.html
http://dev.mysql.com/doc/refman/5.0/en/innodb-backup.html

You should follow the procedure to move over files and folders, rather than using mysqldump, to do this migration.
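In outline, the cold-backup procedure from those pages would look something like this (a sketch only, not tested here; the paths follow the configs quoted earlier, and both servers must be shut down cleanly, because InnoDB keeps its table data in the shared ibdata1 file, not in the per-database folder - the .frm files in csrt3/ are only table definitions):

On SOURCE:

# mysqladmin -u root -p shutdown
# cd /vol0/mysqldb   (the real datadir behind the /var/db/mysql symlink)
# tar -cvzf /filer/innodb-cold.tgz ibdata1 ib_logfile0 ib_logfile1 csrt3

On TARGET (with mysqld stopped):

# cd /usr/db/mysql && tar -xvzf /filer/innodb-cold.tgz
# chown -R mysql:mysql /usr/db/mysql
# then start mysqld again

This would also explain what you saw earlier: copying only the csrt3 folder moved the .frm definitions but none of the row data, which lives in ibdata1.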
Nice to see you back in the game! Yeah, the suggestions from a former employee here (senior admin) gave me these results. I am utilizing the links you provided today.

His suggestions and results:

mysql> FLUSH TABLES WITH READ LOCK;
Query OK, 0 rows affected (0.01 sec)

engineering# mysqldump --opt --skip-lock-tables --add-drop-table --single-transaction --default-character set=binary csrt3 > csrt3.sql
mysqldump: Character set 'set=binary' is not a compiled character set and is not specified in the '/usr/local/share/mysql/charsets/Index.xml' file

engineering# mysqldump --opt --skip-lock-tables --add-drop-table --single-transaction csrt3 > csrt3.sql
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `Attachments` at row: 65583

mysql> UNLOCK TABLES;
Query OK, 0 rows affected (0.00 sec)
Tonight I will be following the instructions from the links you sent. I have a question, however: I noticed that on the SOURCE server, within its /etc/my.cnf, ALL the InnoDB lines are commented out. Aren't these required in order to run an InnoDB database? The database is running just fine and has been for almost 2 years. I have all of them commented out on the TARGET server as well (default).

Also, the MySQL instructions mention the following for recovery:

mysqlbinlog yourhostname-bin.123 | mysql

Is .123 a literal term? I assume that it is, seeing that yourhostname is the only word actually italicized.

Hello again. Since I am by no means a MySQL guru, I got around having to know all the commands and installed phpMyAdmin on SOURCE. Here is what I discovered right off the bat:



Attachment: csrt3.jpg
I don't think you need those InnoDB settings enabled in my.cnf.

They are most probably enabled by default with a default value, and you can override these settings by uncommenting the options in my.cnf.

---

I don't know about the 123 in that command. I suppose it is just to identify the file and give it an extension. From the MySQL manual I don't see anything that says otherwise (http://dev.mysql.com/doc/refman/5.0/en/mysqlbinlog.html)

---

Is there a specific reason to store attachments in MySQL? Wouldn't it be better and easier to migrate if you stored only the filename or even the whole path, and stored the attachments themselves as files on the server? This way you do not end up with such a large database.

If all this trial-and-error does not help, you could also write a temporary PHP script on the SOURCE server which stores the attachments as files, and saves the path and filename to a new table in your database. Then you can remove the Attachments table, which will result in a very small database.

Then you could move all files manually and migrate the database easily, because its size is reduced by something like 99%.

If you don't want to change your existing scripts that use the Attachments table, you could adapt the temporary script to store the attachments in your MySQL database again.
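On the export side, MySQL itself can write a BLOB column straight to a file, which might reduce how much PHP is needed (a sketch with hypothetical column names id and Content - check the real RT schema first; SELECT ... INTO DUMPFILE writes the raw bytes on the server host, requires the FILE privilege, and handles exactly one row per statement):

# mysql -u root -p csrt3 -e "SELECT Content INTO DUMPFILE '/tmp/attachment_42.bin' FROM Attachments WHERE id = 42"

You would still need a small loop over the ids, but the byte-for-byte extraction itself would be done by the server.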
The admin before me set the db up this way. If only I knew PHP. No clue. Probably very simple. I just never had a chance to learn it.

As far as getting this db onto the TARGET before my boss fires me - can't I just move a table at a time? This .bin method is not working:

TARGET:

To be able to recover your InnoDB database to the present from the binary backup just described, you have to run your MySQL server with binary logging turned on:

I enabled the binary log on TARGET using the following command: mysql> SET SESSION SQL_LOG_BIN=1;


From there I am following the procedure from the links you sent:

1) Shut down your MySQL server and make sure that it shuts down without errors.
2) Copy all your data files (ibdata files and .ibd files) into a safe place. (copied to /filer (NAS))
3) Copy all your ib_logfile files to a safe place. (copied to /filer (NAS))
4) Copy your my.cnf configuration file or files to a safe place. (copied to /filer (NAS))
5) Copy all the .frm files for your InnoDB tables to a safe place. (copied to /filer (NAS))

Then you can apply the binary log to the backup database to achieve point-in-time recovery: mysqlbinlog yourhostname-bin.123 | mysql

SO FROM TARGET (the TARGET server's hostname is engineering2): mysqlbinlog engineering2-bin.123 | mysql -u root -p ???

engineering2# mysqlbinlog engineering2-bin.123 | mysql -u root
mysqlbinlog: File 'engineering2-bin.123' not found (Errcode: 2)
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
engineering2# mysqlbinlog engineering2-bin.123 | mysql -u root -p
mysqlbinlog: File 'engineering2-bin.123' not found (Errcode: 2)
Enter password:                                                              

Where are the bin files stored? I am under the assumption that hostname means the hostname of this TARGET server. Am I just missing the boat here?
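Two things seem to be going on here. SET SESSION SQL_LOG_BIN=1 does not switch binary logging on; the server only writes binary logs when it is started with log-bin, and it writes them into the datadir. And the .123 in the manual is just an example sequence number - 5.0 actually names the files with six digits (.000001, .000002, ...). A sketch, assuming the datadir from the my.cnf quoted earlier:

In /etc/my.cnf on TARGET, under [mysqld]:

log-bin = engineering2-bin

Then restart mysqld and the numbered files should appear in the datadir:

# ls /usr/db/mysql/engineering2-bin.*
# mysqlbinlog /usr/db/mysql/engineering2-bin.000001 | mysql -u root -p

Keep in mind the binary log only records changes made after it was enabled, so it cannot replay anything from before the restart.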



I am sorry that I can't help you more with the InnoDB moving. I have never done this myself before, so this is out of my knowledge.

I am prepared, however, to create the PHP scripts for you, but then I need to be sure that you have PHP on each server, and I need to know the version of PHP on both servers.

I also would need the database structure of the Attachments table to create the scripts. You can make a screenshot in phpMyAdmin, that's fine.

You will also need to move all the other tables. You can do so by exporting them on the SOURCE server with phpMyAdmin, and importing them again on the TARGET server with phpMyAdmin - all tables except the Attachments table (see the sketch below).
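A command-line equivalent of "everything except Attachments", in case phpMyAdmin times out on the bigger tables (a sketch; --ignore-table takes db_name.tbl_name and can be given more than once):

On SOURCE:

# mysqldump -u root -p --ignore-table=csrt3.Attachments csrt3 > csrt3_no_attachments.sql

On TARGET:

# mysql -u root -p csrt3 < csrt3_no_attachments.sql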

I will then create two scripts, one for export, and one for import, so you can run these scripts to do the migration. I will add instructions to the output of these scripts for you to follow.

---------------

You should tell your boss that he should not blame you, but the guy who set up this database. It is not good practice to store attachments in MySQL unless they are really small. It is always easier and better to store them as files on disk and just store the location in the database. The problems you are facing at the moment could have been avoided in the first place.
SOURCE PHP VERSION: 5 (not sure how to find the exact version; can't locate the .tar)
TARGET PHP VERSION: 5.2.6

I will export the tables from SOURCE, excluding Attachments, tonight.

Thank you very much!!



Attachment: Attachment-table.jpg
To further inform you: this database on SOURCE (csrt3) was originally titled rt3 (changed to csrt3 by the former admin) and is AUTOMATICALLY CREATED in MySQL when installing the RT ticketing software (an open-source ticketing application). On the TARGET server RT (a newer version) was installed by myself. It automatically created the rt3 database in the TARGET MySQL with all the tables you see in the older phpMyAdmin screenshot I sent (the screenshot is from SOURCE, but the tables created on the TARGET server are identical [see attachment]). Just so you are aware of this.....

Attachment: RT3-DB.jpg
I was hoping the entire attachment would be a single BLOB value. But it is split across multiple tables (headers and content separated, for example). Without sample data, it is impossible for me to create the PHP scripts.

I downloaded the open-source software, but it is written in Perl, which I do not know well enough to debug the possible content of this table.

Can you export 10 rows of data from this table so I can recreate it and try to write the export and import scripts? You can do this with phpMyAdmin; there is a setting in the Export tab that allows you to dump a small piece of the table.

While I am writing this I thought of something else you could try: why not export the table with phpMyAdmin in files of just 10000 records each? It would be a massive job, but it might work. Exporting and importing a small amount at a time could solve your problem, though it would take a lot of time. Just try it with a few files to find out if this is a possible solution. If I can't manage to figure out the scripts, you would still have a backup solution.
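The same chunking idea can be scripted with mysqldump instead of clicking through phpMyAdmin (a sketch, assuming the row order stays stable between runs; --where is appended to the SELECT that mysqldump issues, so a bare "1 LIMIT ... OFFSET ..." works as a paging trick):

First chunk, with table structure:

# mysqldump -u root -p --skip-extended-insert --where="1 LIMIT 10000 OFFSET 0" csrt3 Attachments > chunk0.sql

Later chunks, data only (--no-create-info skips the CREATE TABLE, --skip-add-drop-table the DROP):

# mysqldump -u root -p --skip-extended-insert --no-create-info --skip-add-drop-table --where="1 LIMIT 10000 OFFSET 10000" csrt3 Attachments > chunk1.sql

Then import each file on TARGET in order:

# mysql -u root -p csrt3 < chunk0.sql
# mysql -u root -p csrt3 < chunk1.sql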

----

Did you by any chance contact the company behind this open-source software? They might be able to help you with this as well.

I also wonder if this software has a setting to store attachments on disk instead of in the database. They might even have a script in their system to migrate all database attachments to files on disk. If I were you I would certainly raise this issue with them.
I will export the 10 rows tonight. I cannot do this while the db is being used (or can I?).

I will also try the 10000-at-a-time export/import tonight.

Looking through RT's RT_SiteConfig.pm file - RT_Config.pm (default):


# }}}

# {{{ Database Configuration

# Database driver beeing used. Case matters
# Valid types are "mysql", "Oracle" and "Pg"

Set($DatabaseType , 'mysql');

# The domain name of your database server
# If you're running mysql and it's on localhost,
# leave it blank for enhanced performance
Set($DatabaseHost   , 'localhost');
Set($DatabaseRTHost , 'localhost');

# The port that your database server is running on.  Ignored unless it's
# a positive integer. It's usually safe to leave this blank
Set($DatabasePort , '');

#The name of the database user (inside the database)
Set($DatabaseUser , 'rt_user');

# Password the DatabaseUser should use to access the database
Set($DatabasePassword , 'rt_pass');

# The name of the RT's database on your database server
Set($DatabaseName , 'rt3');

# If you're using Postgres and have compiled in SSL support,
# set DatabaseRequireSSL to 1 to turn on SSL communication
Set($DatabaseRequireSSL , undef);

# }}}                                                                    

Not sure how your suggestion would be added to this script. I will delve further into this.

You certainly can export with phpMyAdmin while the database is running.
OK. Good to know!

SOURCE: phpMyAdmin - within Export for the table "Attachments" I see "Dump 8,379,266 (this number is constantly changing) rows starting at record 0". So how would I back up 10,000 records at a time (see attachment)? I guess the real question, given this phpMyAdmin screen, is: how can I view the records?

Attachment: Attachment-table-to-EX-Exch.jpg
I successfully imported 10000 rows from the SOURCE MySQL to the TARGET MySQL:

Export (SOURCE): phpMyAdmin
Import: mysql -u root -p -b csrt3 < Attachments.sql

Now the question is: for the next 10000 rows do I run mysql -u root -p -b csrt3 < Attachments.sql again? Won't this overwrite the recently imported Attachments table and replace it with rows 10001 through 20000? So the question is: how do I import individual tables using the CLI? Something like: mysql -u root -p -b Attachments < Attachments.sql

mysql -u root -p -b csrt3 < Attachments.sql would just replace the existing table.
Hi,

This is certainly good news that you were able to import 10000 records.

What you need to do for the next 10000 is uncheck the Structure part, and fill out the Dump parameters. I attached a screenshot with some markings. For the next 10000 rows you will need to fill out 10000 & 10001, the next batch 10000 - 200001

This way the table will not be created each time you do an import. It will only import data. You should always use the same command for each import:

# mysql -u root -p csrt3 < Attachments.sql

If the table structure is not exported, but only the data, it will just add the next 10000 rows and leave everything existing as is. The reason you don't need the table name in the command is that it is included in the .sql file like this:

INSERT INTO `Attachments` VALUES ()

Therefore you only need to tell mysql which database it should be executed against (csrt3 in this case)
Attachment: markings.jpg


For the next 10000 rows you will need to fill out 10000 & 10001:

Not sure what you mean by filling out 10000 & 10001. Isn't 10,000 already added? Won't 10,001 be added when I do the 10,000-20,001?

the next batch 10000 - 200001:

I am assuming that you mean 10,000-20,001, not 10,000-200,001.
I am now doing (with Structure unchecked), "Dump 10,000 row(s) starting at record # 10,001."
Of course, you are right, I mixed up some numbers. I assumed it would start with 1, but it is 0.

It says "Dump x records starting from y", where you would start like this:

x - y (range)
10000 - 0 (0 -> 9999)
10000 - 10000 (10000 - 19999)
10000 - 20000 (20000 - 29999)
...

The first 10000 rows should be exported with table structure, the rest without. You already have 10000 records imported into your database, so if they are OK, just uncheck the table structure option for the rest of the records. If you want to start over to make sure, delete the table on the TARGET server and start with 10000-0, making sure you have table structure enabled for that export. The rest of the exports again without table structure.
So far so good. I have only imported 20,000 rows so far. I imported all the other tables and all are working! I would like to keep this connection with you until I go live, just to make sure (that is, if you don't mind, of course). Most likely this will go live a week from Monday (Monday, August 11th).
ASKER CERTIFIED SOLUTION from DrBrainiac (solution text available to Experts Exchange members only)
I tried this yesterday. I successfully exported/imported 30,000 rows; however, I received an error message that had nothing to do with memory - it was related to how a row was laid out. After it was done importing I noticed that there were not 30,000 rows (~27,000). I'm not sure if it just stopped importing once it saw the error or not. I will have to test this on Monday. If I get the error again I will send it to you. Thanks again!!
OK. I am exporting via SOURCE using phpMyAdmin. After the initial export I am unchecking the "Structure" box and have been importing (via FTP -> mysql -u root -p csrt3 < Attachments.sql). All has been working well so far; however, I haven't noticed any new tickets being added to the new RT ticketing. All paths are correct in RT's config. I have, however, noticed (see attachments) that "Attachments2's" index cardinality as well as "Attachments3's" index cardinality numbers are identical to the "PRIMARY" index on the TARGET server. Is this normal?

Attachment: Attachment-table.jpg
Attachment: index-issue.jpg
"I haven't noticed any new tickets being added to the new RT Ticketing." May be due to the fact that I am still adding the "Attachments" rows. I am not sure though and just want to be sure before I spend another day adding rows to "Attachments" in 1.5GB increments.
Well, after it was all said and done, my boss decided that he wants to keep things the way they are. Three months of work and it will never go live! I really learned a great deal, however, so it was not in the least a lost cause!

The last 4,000 tickets (the newest), for some reason, would not migrate over. Perhaps this was because they were still live, thus locked? Not sure.

Thank you very much for all your amazing suggestions and for giving me the time of day! It really helped me along the way!!

I wish there was a 1,000 points button I could select for you!!!


Hands down, the BEST person I have ever worked with at Ex Ex! TOP NOTCH!!! :~)