While InnoDB is a rather resilient storage engine, it can still get corrupted by a disk failure or a power outage. When that happens, your database will quite possibly crash during startup with lines like the following in the log:
170821  9:14:58  InnoDB: Assertion failure in thread 140081658566400 in file trx0purge.c line 848
InnoDB: Failing assertion: purge_sys->purge_trx_no <= purge_sys->rseg->last_trx_no
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
170821  9:14:58 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see http://kb.askmonty.org/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

Server version: 5.5.35-MariaDB
key_buffer_size=268435456
read_buffer_size=524288
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 5356701 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x48000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0xa8c62e]
/usr/sbin/mysqld(handle_fatal_signal+0x40b)[0x6d2f8b]
/lib64/libpthread.so.0[0x320120f710]
/lib64/libc.so.6(gsignal+0x35)[0x3200e32625]
/lib64/libc.so.6(abort+0x175)[0x3200e33e05]
/usr/sbin/mysqld[0x885d4e]
/usr/sbin/mysqld[0x8861fb]
/usr/sbin/mysqld[0x968292]
/usr/sbin/mysqld[0x95cee7]
/usr/sbin/mysqld[0x884986]
/usr/sbin/mysqld[0x87a3ec]
/lib64/libpthread.so.0[0x32012079d1]
/lib64/libc.so.6(clone+0x6d)[0x3200ee88fd]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
170821 09:14:58 mysqld_safe mysqld from pid file /var/lib/mysql/centeron.pid ended
Step 1 – Start mysqld in recovery mode
If the database is somehow still running, the first step is to bring it down. If you cannot stop it gracefully, you can use kill -9 as a last resort:
service mysql stop
Then we have to set up a kind of “safe mode”. To prevent applications from accessing the database while you’re working on recovering it, it is a good idea to change the port number temporarily.
Let’s edit /etc/my.cnf:
[mysqld]
port = 8889
That’s not all. To prevent a crash when opening InnoDB tables, we must also add the following two lines:
innodb_force_recovery=3
innodb_purge_threads=0
In this mode your database will be read-only. Save the my.cnf file, and attempt to start mysqld:
service mysql start
The database should be able to start now, albeit read-only.
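If you want to confirm that the server really came up with the recovery settings, a quick check over the temporary port (8889 in this example; adjust it to whatever you chose) could look like this:

mysql --host=127.0.0.1 --port=8889 -u root -p -e "SHOW VARIABLES LIKE 'innodb_force_recovery';"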
Step 2 – Make a list of corrupted tables
The ‘scandisk’ equivalent for MySQL is mysqlcheck:
mysqlcheck --all-databases
centreon_status.nagios_timedevents                 OK
centreon_status.nagios_timeperiod_timeranges       OK
centreon_status.nagios_timeperiods                 OK
centreon_storage.acknowledgements                  OK
centreon_storage.centreon_acl                      OK
centreon_storage.comments                          OK
centreon_storage.config                            OK
centreon_storage.customvariables                   OK
centreon_storage.data_bin
warning  : 1 client is using or hasn't closed the table properly
error    : Record at pos: 138281990 is not remove-marked
error    : record delete-link-chain corrupted
Note all databases that contain tables in an “error” or “corrupted” state. You could try to recover individual tables, but I found this rather ineffective, as mysql continued to crash at startup even with the corrupted databases completely removed.
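If the mysqlcheck output is long, a simple filter makes the broken tables easier to spot. This is just a grep sketch that relies on healthy tables ending their line with “OK”:

mysqlcheck --all-databases | grep -v 'OK$'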
Step 3 – Backup and drop the corrupted databases
First, dump the affected databases:
mysqldump corrupted_database > corrupted_database.sql
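If several databases are affected, a small shell loop saves some typing. The database names below are only placeholders; substitute your own list:

for db in corrupted_database_1 corrupted_database_2; do
    mysqldump "$db" > "$db.sql"
done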
Once you have their backups, drop them:
mysql -u root -p
DROP DATABASE corrupted_database;
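If you prefer not to open an interactive session, the same statement can be run non-interactively with the client’s -e option:

mysql -u root -p -e "DROP DATABASE corrupted_database;"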
Step 4a – Restart MySQL in normal mode
Stop the database:
service mysql stop
Comment out the InnoDB recovery settings in /etc/my.cnf:
#innodb_force_recovery=3
#innodb_purge_threads=0
Then try to restart the database:
service mysql start
If it crashes again, check the error log, then go back to step 1 and dump more databases. I ended up having to export all databases except the default internal ones.
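The exact location of the error log depends on your distribution and configuration; on a typical CentOS/MariaDB setup it is often /var/log/mysqld.log or a <hostname>.err file inside the data directory. Something like this shows the most recent entries:

tail -n 50 /var/log/mysqld.log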
Step 4b – Remove InnoDB data files
If you’ve exported all databases except mysql and the server still does not want to start properly in normal mode, you have to remove the ibdata1 file, which holds the InnoDB system tablespace. Make sure you have backed up all your databases before you attempt this step!
Make sure to stop the database first:
service mysql stop
cd /var/lib/mysql/
ls -la
You can either move or remove the files whose names begin with “ib” (ibdata1 and the ib_logfile files). I’ve chosen to move them, as the safer option:
mv ib_logfile1 ibdata1 ib_logfile0 /tmp/
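On the next start, InnoDB will recreate ibdata1 and the log files from scratch. Before restarting, you can double-check that nothing InnoDB-related was left behind (an empty result means the files are gone):

ls /var/lib/mysql/ | grep '^ib'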
Try restarting mysql. It should now start cleanly:
service mysql start
Step 5 – Recreate databases, import backups
mysql -u root -p
CREATE DATABASE corrupted_database;
Then import each .sql backup file that we created in the earlier steps:
mysql corrupted_database < corrupted_database.sql
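If you ended up dumping many databases, a loop over the .sql files can recreate and reload them in one go. This is only a sketch: it assumes the dumps sit in the current directory and that each file is named after its database:

for f in *.sql; do
    db="${f%.sql}"
    mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`;"
    mysql "$db" < "$f"
done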
Step 6 – Reset MySQL settings back to default
Edit your /etc/my.cnf file and comment out all the settings we added in step 1:
[mysqld]
#port = 8889
#innodb_force_recovery=3
#innodb_purge_threads=0
Save the file and restart mysql:
service mysql restart
Your applications should now be able to connect to the database.
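As a final sanity check, you can run mysqlcheck once more and confirm that every table now reports OK:

mysqlcheck --all-databases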