marrowyung
asked on
mysqldump to another mount drive
hi,
I am not a linux guy and when I do df -h I see this:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 59G 9.8G 47G 18% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/mapper/rhel-mysql
197G 159G 29G 85% /mysql
The problem now is that I want to run mysqldump, and the dump .sql file is so large that my existing volume can't store it!
Is the default volume in Red Hat Linux /dev/sda1, which is mounted on /?
How can I use /mysql as the backup volume?
I read this: https://serverfault.com/questions/804270/how-to-compress-this-mysqldump
Or what is the correct command to zip the mysqldump file so that I can restore it to XtraDB Cluster 8.0.19?
And what is the correct command to restore the zipped files to XtraDB Cluster 8.0.19?
Use Tomas' comment.
Either gzip, xz, or zstd... with zstd being by far the most aggressive compressor, producing the smallest compressed files.
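As a runnable sketch of the pipe shape all three compressors share: printf stands in for mysqldump here (mysqldump needs a live server), and you can swap gzip/gunzip for xz/xzcat or zstd/zstdcat as installed.

```shell
# printf is a stand-in for "mysqldump yourdbname"; the pipe shape is the point.
printf 'CREATE TABLE t (id INT);\n' | gzip -9 > /tmp/dump.sql.gz
# Restoring streams the decompressed text back out (here to stdout instead of mysql):
gunzip < /tmp/dump.sql.gz
```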
ASKER
Do you know, if I zip it on Windows and WinSCP it to that Linux box, whether I can unzip and import that mysqldump file there? That is how I am doing it!
Thomas,
"You direct the mysqldump .sql file to a filesystem location simply by setting the desired path to the .sql file like this. "
The source box does not have enough space, and on the target one I can't even use root to SSH in, so I can't back up directly to a remote location.
ASKER
Please also help to answer this:
https://www.experts-exchange.com/questions/29191439/MySQL-collatoin.html
I am not sure whether, if the collation on both sides is different, restoring a DB from 5.7 to 8.0.19 will give problems.
ASKER
"or in your case if you want to use /mysql as backup volume "
Is /mysql in my df -h output just a mount point, and can we just refer to the mount point like this:
mysqldump yourdbname > /mysql/mydbname_backup.sql ?
ASKER
Hi,
"by /mysql in my df -h output just a mount point and we can just refer to the mount point like this:"
A mount point usually has a filesystem attached to it. So yes, you set the path to the desired file location, as I showed before.
And if you want to send the stream of the mysqldump to a remote location then you simply add a ssh pipe with the desired path. Something like this.
mysqldump yourdbname | gzip -9 | ssh user@remotehost "cat > /path/to/yourdbname.sql.gz"
Regards,
Tomas Helgi
You can also mount a remote file system, which is what I normally do for this type of... cross machine work...
Since sshfs syntax is... a bit... obscure, I normally place the mount command in a script or alias, so once I figure it out, I have some way to remember the dark arcane conjuration required to get the mount working.
Something like this, where I use $remote set to the actual hostname of the remote site or project (if there's no DNS resolution for the IP)...
mkdir /mnt/$remote
echo "$pass" | sshfs $user@host:/sites/some-client/some-site/htdocs /mnt/$remote -o StrictHostKeyChecking=no -o ServerAliveInterval=5 -o workaround=rename -o password_stdin
This way you can just use normal file commands with /mnt/$remote, rather than complex pipelines.
Sometimes it's faster/easier to just mount the $remote, than figure out the pipeline command.
ASKER
If that fails during restore and gives this error again:
ERROR 1231 (42000) at line 1222: Variable 'sql_mode' can't be set to the value of 'NO_AUTO_CREATE_USER'
how can we solve it with this command: gunzip < mydname_backup.sql.gz | mysql -u [uname] -p[pass] [yourdbname]? I am importing a 25GB .gz using this command and it drops out because of this error.
Last night I tried to open a dump with a similar error. Do I really have to remove the sql_mode commands one by one myself? Is there an easier way to remove the line on the fly, without opening the whole .sql file? Can a Linux command-line tool help with this?
Removing the sql_mode commands one by one showed I can restore the triggers/SPs, so does that mean one method is to export without SPs/triggers and restore the data first, then script out the old triggers/SPs from the old server one by one and deploy them to the new server?
Loading the script in MySQL Workbench shows an error; the software does not expect to load a .sql file that is 45GB.
Hi,
The error
ERROR 1231 (42000) at line 1222: Variable 'sql_mode' can't be set to the value of 'NO_AUTO_CREATE_USER'
means that in your mydname_backup.sql.gz file there is a line
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE=.....
with some sql_mode options, including NO_AUTO_CREATE_USER in your case. NO_AUTO_CREATE_USER is set by default in versions 5.x BUT is not supported in version 8 and needs to be removed from the backup file before running the restore.
Regards,
Tomas Helgi
ASKER
"NO_AUTO_CREATE_USER is set by default in versions 5.x BUT is not supported in version 8 and needs to be removed from the backup file before running the restore."
Thanks. But how can I remove this line without opening the file? It is 45GB, too big to open, right?
Do you think it is usually there when mysqldump scripts out logic? I found that once I removed it from a small .sql dump file, the triggers/SPs were recreated.
Hi,
This line is in the first 50 lines of the file so you need to figure out a way to open the file and edit it.
Some editors can open large files "one page" at a time. In linux you can split the files into multiple parts using the "split command", then edit the first part where the sql_mode line is and then concatenate the parts back into one file.
The Linux sed command can also be useful here.
Something like this:
sed -i 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql
or
sed -i.bak 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql
where the latter command creates a .bak file (a backup of the original file).
Regards,
Tomas Helgi
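One wrinkle worth noting: a bare substitution can leave a dangling comma in the option list. A small runnable sketch (the sample line is an abbreviated, hypothetical version of what a 5.7 dump emits) that also eats the trailing comma:

```shell
# Abbreviated stand-in for the sql_mode line near the top of a 5.7 dump:
line='/*!40101 SET SQL_MODE="NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" */;'
# \{0,1\} makes the comma optional, so the remaining option list stays valid:
echo "$line" | sed 's/NO_AUTO_CREATE_USER,\{0,1\}//g'
```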
ASKER
hi,
tks.
"sed -i 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql "
This means: in the file yourbackupfile.sql, remove every occurrence of "NO_AUTO_CREATE_USER" and save back to yourbackupfile.sql?
Hi,
Yes, see this link https://linuxize.com/post/how-to-use-sed-to-find-and-replace-string-in-files/
You can put the time command in front of sed to see how long this would take, but I would expect it to take several minutes for sed to process a huge file.
time sed -i 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql
Regards,
Tomas Helgi
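A tiny runnable illustration of the time-prefixed run, with a /tmp sample file standing in for the 45GB dump; time writes its report to stderr, so the redirected sed output is unaffected:

```shell
# Sample file standing in for the real dump:
printf 'SET SQL_MODE="NO_AUTO_CREATE_USER";\n' > /tmp/sample.sql
# time's report goes to stderr; the cleaned output lands in /tmp/clean.sql as usual:
time sed 's/NO_AUTO_CREATE_USER//g' /tmp/sample.sql > /tmp/clean.sql
cat /tmp/clean.sql
```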
ASKER
BTW, how do you open a 45GB .sql file?
so this one:
"sed -i 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql "
is not going to overload the system the way an editor opening a 45GB .sql file would, and finally throw an error and stop?
Hi,
Sed processes the file line by line. Sed also works on streams, so it can search and replace strings in piped commands as well.
Meaning that each line is read, processed, and written back to the file.
The only time factor is the disk I/O it takes to read each line and then write it back to the file.
Regards,
Tomas Helgi
ASKER
"Sed works also on streams so it can search and replace strings in a piped commands as well."
Good, and thanks.
But one thing: can it be combined with mysqldump to remove that line on the fly, while mysqldump is writing to the target .sql?
What if the .sql file is a .gz file? Can sed handle it?
Hi,
As I said, it works on streams, so you can do the search and replace "on the fly" in a piped fashion.
Meaning that, yes, you can combine it with mysqldump and edit the backup file as it gets created.
Something like this:
mysqldump .... | sed 's/NO_AUTO_CREATE_USER//g' | gzip -9 > mybackupfile.sql.gz
Regards,
Tomas Helgi
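The same sed stage also works on the restore side of the pipe. A sketch where printf fakes the compressed dump and cat stands in for the mysql client (both are stand-ins for illustration; the real command would end in mysql -u [uname] -p[pass] [yourdbname]):

```shell
# Fake a gzipped dump that contains the offending token:
printf 'SET SQL_MODE="NO_AUTO_CREATE_USER";\n' | gzip -9 > /tmp/mydb.sql.gz
# Strip the token while streaming toward the client (cat plays the mysql role here):
gunzip < /tmp/mydb.sql.gz | sed 's/NO_AUTO_CREATE_USER//g' | cat
```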
ASKER
OWWO, let me try man, you are very helpful!
and I am testing this:
"sed -i 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql "
But how about removing the line in an already-zipped .gz file?
update you soon.
Hi,
"but how about removing the line in an already-zipped .gz file?"
You will need to unzip the file to disk (recommended), process it from there, and then zip the file again.
It may be possible to do the unzip in a pipe, but that may need some memory, and for huge files it may be better to unzip to disk first.
Regards,
Tomas Helgi
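If keeping the full .sql off disk matters more than the caution above, the gunzip/sed/gzip stages can also be chained so only compressed files ever land on disk; both tools stream line by line, so memory use stays small. A sketch with a tiny fabricated dump (the /tmp names are illustrative):

```shell
# Fabricate a small gzipped "dump" for illustration:
printf 'SET SQL_MODE="NO_AUTO_CREATE_USER";\n' | gzip -9 > /tmp/old.sql.gz
# Decompress, edit, and recompress in one pass; no intermediate .sql file:
gunzip < /tmp/old.sql.gz | sed 's/NO_AUTO_CREATE_USER//g' | gzip -9 > /tmp/new.sql.gz
gunzip < /tmp/new.sql.gz
```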
ASKER
"You will need to unzip the file to disk (recommended) and process it from there and then zip the file again."
Exactly what I thought! Linux commands are not that robust, right?
Let me try and get back to you later.
ASKER
One thing: the error message is:
"'sql_mode' can't be set to the value of 'NO_AUTO_CREATE_USER'"
so it's not just about removing 'NO_AUTO_CREATE_USER', right, but also sql_mode?
Is there any way to modify both strings in the same on-the-fly mysqldump-and-zip pipeline too?
Resolving NO_AUTO_CREATE_USER is a new issue (different from original question).
Best to close out this question + open a new question to resolve the NO_AUTO_CREATE_USER problem.
ASKER
one thing, this:
sed -i.bak 's/NO_AUTO_CREATE_USER//g' yourbackupfile.sql
does that mean it will remove the WHOLE LINE containing NO_AUTO_CREATE_USER? Is it possible to tell sed to just remove the WHOLE LINE for me?
Hi,
"does that mean it will remove the WHOLE LINE containing NO_AUTO_CREATE_USER? Is it possible to tell sed to just remove the WHOLE LINE for me?"
No, it will remove only the desired words, not the whole line.
To remove lines containing some pattern you issue
sed '/NO_AUTO_CREATE_USER/d'
But as I said, you should not remove the whole sql_mode setting with all its options, only those options that are incompatible with version 8.0; you risk losing intended behavior otherwise. You need to be exactly sure what each option does for your database configuration, so that you know what to expect if you remove it.
Regards,
Tomas Helgi
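A quick runnable contrast between the two forms, on a fabricated three-line file: the substitution shown earlier removes just the token, while the d command below drops every matching line whole.

```shell
# Three-line sample; only the middle line matches:
printf 'keep one\nSET SQL_MODE="NO_AUTO_CREATE_USER";\nkeep two\n' > /tmp/sample.sql
# The d command deletes whole matching lines from the output:
sed '/NO_AUTO_CREATE_USER/d' /tmp/sample.sql
```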
ASKER
"But as I said here you should not remove whole sql_mode and all options only those that are incompatible with version 8.0 You might risk of losing intended behavior otherwise."
OK, from my point of view I think removing sql_mode is also needed; in the dump file of another DB I had this problem too, and when I removed sql_mode as well, it gave me back the SPs I needed.
Thanks anyway.
You direct the mysqldump .sql file to a filesystem location simply by setting the desired path to the .sql file, like this:
mysqldump yourdbname > /path/to/mydbname_backup.sql
or, in your case, if you want to use /mysql as the backup volume:
mysqldump yourdbname > /mysql/mydbname_backup.sql
If you want to compress the file on the fly, you can do it simply by piping the stream through gzip, like this:
mysqldump yourdbname | gzip -9 > /mysql/mydbname_backup.sql.gz
gzip -9 is the best compression ratio that you can use. To restore a gzipped .sql file to MySQL, you simply run a command like this:
gunzip < mydname_backup.sql.gz | mysql -u [uname] -p[pass] [yourdbname]
Regards,
Tomas Helgi