marrowyung

asked on

tail log backup of MySQL

hi,

I am looking into how to do a tail-log backup for any MySQL variant, such as MariaDB and Percona XtraDB Cluster.

Please suggest the way to do it.

A tail-log backup is the backup taken when MySQL dies while the latest binlog has not been backed up yet, but we need the data inside it to be restored. How can we do that, given that we didn't back up the last transactions and MySQL just crashed?

arnold

You replay the binary log, which gives you recovery to a point in time.
You could set up a master/slave type scenario...

https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html
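
As a sketch of what replaying the binary log means in practice (the path, file name, and cutoff datetime below are placeholder examples, not values from this thread):

```shell
# Replay binlog events up to a cutoff time for point-in-time recovery.
# Adjust the binlog path/file and datetime to your own server.
mysqlbinlog --stop-datetime="2020-01-01 12:00:00" \
    /var/lib/mysql/mysql-bin.000042 | mysql -u root -p
```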
marrowyung

ASKER

What if the binlog is not backed up yet? Can we still replay it? That is the concern.

"You could set up a master/slave type scenario..."

You mean we don't even need it, as the slave will take over from the primary as a new primary? Then we can simply recover the old MySQL primary later?

How about recovering the data of the old primary if the old primary is totally lost? Just reinstall the old MySQL primary and join it back to the cluster?

How about if the primary is not totally lost, but later becomes completely out of sync with the new primary? Once it joins back as a new slave, will it sync back automatically?


"https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html "

You are saying the binlog is already being written out continuously, so we don't have to back up the tail of the log ourselves? So all we need to do is replay the logs in the log directory using the mysqlbinlog command?

BTW, what is the global variable that enables the binlog?
log-bin=path
You need to set which database to binary log.
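
A minimal my.cnf sketch of that configuration (the paths and database name are examples, and `binlog-do-db` is an optional filter, not required):

```ini
[mysqld]
server-id    = 1
log-bin      = /var/log/mysql/mysql-bin   # base path/prefix for binlog files
binlog-do-db = mydb                       # optional: only log this database
```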

MySQL master/slave does not auto-switch.

If MySQL crashes but the server itself is still up, the scheduled filesystem backup will still run.

You may want to work the problem from what redundancy you want, and work to achieve it.

You always have a risk of data loss in a failure event; your setup, backups, and DR plans are how you mitigate and minimize data loss.

In a master/slave setup, if your master instance fails and you switch the slave instance to master, then when the former master instance is restored it needs to be reconfigured as a new slave instance.
Depending on how long it was out, it may merely retrieve the binary logs from the new master and consolidate; if the outage was long, replication has to be re-initiated by restoring the DB from the new master.
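
The repoint step can be sketched in SQL (host, user, password, and coordinates are placeholders; MySQL 8.0 renames this statement to CHANGE REPLICATION SOURCE TO):

```sql
-- On the restored old master: point it at the new master as a slave.
CHANGE MASTER TO
    MASTER_HOST     = 'new-master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = '********',
    MASTER_LOG_FILE = 'mysql-bin.000123',  -- coordinates taken from the new master
    MASTER_LOG_POS  = 4;
START SLAVE;
```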
"You always have a risk of data loss in a failure event; your setup, backups, and DR plans are how you mitigate and minimize data loss."

Yeah, all of that is needed and usually all planned! Like full and differential backups + binlog backups.

"log-bin=path
You need to set which database to binary log"

I don't quite understand this. Any example of the per-DB settings I have to put in my.cnf?

"MySQL master/slave does not auto-switch."
I don't understand, sorry.

"In a master/slave setup, if your master instance fails and you switch the slave instance to master, then when the former master instance is restored it needs to be reconfigured as a new slave instance."

Rebuild the old primary as a new slave from the ground up and rejoin the cluster?

"Depending on how long it was out, it may merely retrieve the binary logs from the new master and consolidate; if the outage was long, replication has to be re-initiated by restoring the DB from the new master."

So whether we use a backup to restore the old primary as the new slave, or just rebuild it as a new slave and rejoin the cluster, depends on the recovery time? Just rejoining the cluster and letting IST and SST handle the replication offers no downtime, right?



All the information is at your fingertips.
log-bin
https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html
+-----------------------------------------+----------------------+
| Variable_name                           | Value                |
+-----------------------------------------+----------------------+
| binlog_cache_size                       | 32768                |
| binlog_direct_non_transactional_updates | OFF                  |
| binlog_format                           | STATEMENT            |
| innodb_locks_unsafe_for_binlog          | OFF                  |
| log_bin                                 | ON                   |
| log_bin_trust_function_creators         | OFF                  |
| log_bin_trust_routine_creators          | OFF                  |
| max_binlog_cache_size                   | 18446744073709547520 |
| max_binlog_size                         | 1073741824           |
| sql_log_bin                             | ON                   |
| sync_binlog                             | 0                    |
+-----------------------------------------+----------------------+


You can set it dynamically, while also adding it to my.cnf so that it stays in place when the system reboots. This avoids an interruption...


Usually, if the outage was, let's say, an hour, i.e. less than the time before the binary logs on the new master are purged/rolled,
then you might be able to simply repoint the old master to be the new slave instance, at which point it will sync.
The risk/issue is if the failure occurred before the old slave was able to retrieve the last update,
at which point you could have a mismatch...

Depending on your setup: the master/slave arrangement is commonly a hot standby; it will not auto-switch roles on failure of the master, unless and until you work it all out so that your DBs and the applications that rely on them switch alongside...
"at which point you could have a mismatch..."
But the old primary, as a new slave, will just sync as much data as possible from the new master, right?

"Depending on your setup: the master/slave arrangement is commonly a hot standby; it will not auto-switch roles on failure of the master."
MariaDB relies on the MaxScale middle tier, and Percona relies on ProxySQL, for failover, right?

So the binlog MySQL creates already includes all log information, including the tail of the log, and all we need to do is know the path, find the files, and restore all the logs using mysqlbinlog?

"Usually, if the outage was, let's say, an hour, less than the time before the binary logs on the new master are purged/rolled,
then you might be able to simply repoint the old master to be the new slave instance, at which point it will sync."

If it is more than an hour, i.e. longer than the time before the binary logs on the new master are rolled, it will only cause data loss on the old primary, but we still keep all the latest transactions on the new primary, right?

"log-bin
https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterbaseconfig.html "

Replication is actually log-based, right? Is any replication MySQL uses not log-based? Or is that page talking about MySQL InnoDB Cluster only?

I don't see this option in Percona PXC, but replication keeps running!

And if I just want to create log backups, I need to do:
[mysqld]
log-bin=mysql-bin
?
Where is the path where the log files are stored, then?


SOLUTION
theGhost_k8

ASKER CERTIFIED SOLUTION
theGhost_k8,

I am not sure whether the term "tail log backup" came across correctly.


Sorry, it is not about point-in-time restore/recovery. It is about when MySQL dies: we need any transaction log that was not yet backed up to still be recoverable AFTER the failure. This is usually the hardest part of backup, as we do not have a complete log backup up to the point JUST BEFORE MySQL died!

So I am asking how we can back up the last bit of the bin/transaction log from MySQL so that we can recover ALL transactions made before MySQL died; this is what I want to know.

arnold,

"to reverse roles relies that your new master insyance was configured for binary logging ... "

insyance = instance ?

And I will make sure the same binlog options are enabled on all nodes!

"I.e. replay the last few transactions to confirm they exist on the new master..."

Yeah! Then how can we back up the last part of the log (the tail log) which we haven't backed up yet, and replay it to the new master?

And how can we make sure whether the new master has or has not received the last bit of binlog / the last transaction commit yet?

SOLUTION
"Not quite sure what the emphasis on backup on A after failure is about."

Because there are transactions on A still not replicated to B, and that's why, even with a slave, we still need a backup policy!
In case of total loss, for example of the DC.

"When the master role is transferred from A to B, you can back up B."

Agreed! And the backup chain is not broken?

"You manage their space consumption to avoid the precaution of data preservation leading to nodes becoming inaccessible, or worse, because they ran out of space."

Use a backup retention policy to control the log space, right?

"The point is to see whether the last few transactions committed to server A before the crash are reflected on server B, or the data is present..."
Yes. Then how do we make sure they are committed on all the other slaves?


Confirm you are talking MySQL, not MSSQL.
The binary logs are managed and have to be periodically purged by script; I do not believe they are self-managed.
The backup is a filesystem-level backup of the binary logs.
Much depends on your setup.
Well, if the transactions are reflected in the binary log, they are likely to have replicated.
Though you are taking a risk if you reprocess a transaction log from a crashed system, A, and run it on server B, the new master instance, if the transaction is not an insert but an update.

At one point, you had circular replication (master/master) where each new record had a unique ID through the use of an increment offset.
What is the circumstance now: a single master instance with multiple slave instances in a hub-and-spoke type of topology?
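
A minimal sketch of the filesystem-level backup plus scripted purge described above (the paths and the 7-day retention are assumptions, not values from this thread):

```shell
# Close the current binlog so the finished files can be copied safely.
mysql -e "FLUSH BINARY LOGS"
# Filesystem-level copy of the closed binlog files (example paths).
rsync -a /var/lib/mysql/mysql-bin.0* /backup/binlogs/
# Tell the server to purge binlogs older than 7 days.
mysql -e "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY"
```

Run from cron, this is the kind of "purged by script" management arnold refers to.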
"Confirm you are talking MySQL, not MSSQL."
MySQL, not MS SQL.

"The binary logs are managed and have to be periodically purged by script."

Any script to share?

"I do not believe they are self-managed."

But in another ticket, about gcache and the binlog, we said the binlog needs a backup retention policy to control the log size, while gcache cleans itself up automatically.

"The backup is a filesystem-level backup of the binary logs."

So is this the ONLY transaction log backup available?

"Well, if the transactions are reflected in the binary log, they are likely to have replicated."
I understand, but what if they are not? How do we verify? By the transaction ID in the /var/lib/mysql/grastate.dat file?

In a busy DB, a transaction can wait for replication, and during that time MySQL can go down because it is too busy!

"Though you are taking a risk if you reprocess a transaction log from a crashed system, A, and run it on server B, the new master instance, if the transaction is not an insert but an update."

Not quite sure what this means. Are insert and update transactions different in this case?

"What is the circumstance now: a single master instance with multiple slave instances in a hub-and-spoke type of topology?"

Log lagging? So replication will slow down, and there is a chance that data is not updated on all slaves before the primary crashes!


SOLUTION
"Backing up the transactions via a filesystem backup is what provides a restore to a point in time."

So there is no native MySQL binlog backup? mysqldump does not offer that?
And we must copy the backup files at the filesystem level?

"https://thesubtlepath.com/mysql/binary-log-growth-handling-in-mysql/"
will take a look.


Each writeout to the binlog file is a transaction record.

mysqldump backs up the entire data from the database.

Consider the binary log as incremental data retention.

As for the binary log, you can control the size of each file before MySQL rolls it and starts a new one.
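
For example (a sketch; 256 MB is an arbitrary value, and the setting should also be added to my.cnf to survive a restart):

```sql
-- Roll to a new binlog file once the current one reaches ~256 MB.
SET GLOBAL max_binlog_size = 268435456;
```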
"As for the binary log, you can control the size of each file before MySQL rolls it and starts a new one."

How do I control the size?

"mysqldump backs up the entire data from the database."
That can be done by mysqldump easily.

"Consider the binary log as incremental data retention."
Then how do we back up ONLY the binlog? Or do we just enable the binlog, let MySQL keep generating binlog files, and then back them up at the filesystem level? So when restoring, we just restore the latest full backup and then replay the logs with mysqlbinlog on each binlog file, one by one?

And back to the original question: so we do not back up the tail of the log for a MySQL cluster, only for a standalone server?

As for the binlog, it seems that once Galera replication/cluster is on, the binlog is also ON automatically?
Check the version.
Use: SHOW GLOBAL VARIABLES WHERE Variable_name LIKE '%size%'
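
The same SHOW VARIABLES technique also answers the earlier question about where the binlog files live (a sketch; log_bin_basename is available in MySQL 5.6 and later):

```sql
-- Size-related binlog settings, e.g. max_binlog_size.
SHOW GLOBAL VARIABLES WHERE Variable_name LIKE '%binlog%size%';
-- log_bin_basename shows the path/prefix of the binlog files.
SHOW GLOBAL VARIABLES LIKE 'log_bin%';
```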


See https://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html


You can set it dynamically, but make sure to add the entry in my.cnf to make it permanent.
and here:
https://thesubtlepath.com/mysql/binary-log-growth-handling-in-mysql/ 

This seems relevant:

"The configuration expire_logs_days will purge flushed binary logs that are beyond the number of days configured here. That means they can automatically rotate off. If they are still being used by replication slaves, then they will be retained. However, don't set this value so low that your backup schedule can't capture them... that is, if your backup plan requires or desires them."

So the variable expire_logs_days will make binlog files expire automatically after the slaves have received and committed the transactions?
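
A sketch of setting the expiry behavior quoted above (7 days is an example value; note that expire_logs_days is deprecated in MySQL 8.0 in favor of binlog_expire_logs_seconds):

```sql
SET GLOBAL expire_logs_days = 7;   -- also add expire_logs_days=7 to my.cnf
```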

Actually, here:

"log-bin=mysql-bin"

Will the log be saved in the data directory, or in a "mysql-bin" folder?

and here:

"Add it to your backup processing:
  • By tightly integrating your binary log management process into your backups, you ensure that your fully defined process executes as a single job flow."

So this means relying on the MySQL backup tools, so that once the log backup is done, a script runs to manage the log size?



Note: your backup plan is full backups plus filesystem backups. The max age has to account for the retention you want, and for how your filesystem-level backup works.
The auto-expire will deal with filesystem storage management.
The frequency of filesystem backups and the backup retention are the secondary backup-space management.
A local copy provides faster access versus having to restore from backup...

There is a time/duration for which you do not want the slave instances to be out of sync.
I think this was discussed in prior questions.
You want to make sure to detect when a slave is in an error state, so it can be corrected as quickly as possible, versus trying to get it caught up, which is the riskiest position to be in if the master instance fails in that timeframe...
"You want to make sure to detect when a slave is in an error state, so it can be corrected as quickly as possible."

How is that possible? I thought Galera replication handles it automatically?

"The auto-expire will deal with filesystem storage management."

So this one, expire_logs_days, is not going to make the binlog expire and be deleted accordingly?

"secondary backup-space management"

How do we handle secondary backup-space management?



 

tks all.