mysqldump -- too many open files

I have tried a lot of things to get around this error.

mysqldump -Q -f -u root -pmypass information_schema --lock-all-tables | gzip -c > information_schema.sql.gz
mysqldump -Q -f -u root -pmypass information_schema --skip-lock-tables | gzip -c > information_schema.sql.gz
mysqldump -Q -f -u root -pmypass information_schema --single-transaction  | gzip -c > information_schema.sql.gz

I've restarted MySQL in between.

But I keep getting screens full of errors:
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'TABLES'': Out of resources when opening file '/tmp/#sql_4fa4_0.MAI' (Errcode: 24 "Too many open files") (23)
mysqldump: Couldn't execute 'show fields from `TABLESPACES`': Out of resources when opening file '/tmp/#sql_4fa4_0.MAI' (Errcode: 24 "Too many open files") (23)
mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'TABLESPACES'': Out of resources when opening file '/tmp/#sql_4fa4_0.MAI' (Errcode: 24 "Too many open files") (23)
mysqldump: Couldn't execute 'show fields from `TABLE_CONSTRAINTS`': Out of resources when opening file '/tmp/#sql_4fa4_0.MAI' (Errcode: 24 "Too many open files") (23)

I even tried while running mysqld_safe.
I also tried shutting down the services that could be using MySQL -- Apache, Postfix, Dovecot -- then restarting MySQL.

Same slew of errors!

What do I need to do differently?

Thanks!
Daniel Wilson asked:
omarfarid commented:
What do you get when you run:

ulimit -a
matrix8086 commented:
Try restarting the MySQL server, and try restarting the host.

It is possible that there is some corruption in the database. Try mysqlcheck and mysqlrepair. Beware: repairing a database can delete the corrupted records.

I had the same problem when the HDD had some bad sectors. I cloned the HDD and ran the MySQL repair on both drives, the old one and the clone, with the same result: some corrupted records were deleted, but I was able to re-add them manually, and all is good on the new HDD.

Best regards!
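A minimal sketch of the check/repair pass described above (credentials and scope are placeholders; on most installs mysqlrepair is simply mysqlcheck invoked with --repair):

```shell
# Read-only corruption check across all databases (prompts for password)
mysqlcheck -u root -p --all-databases --check

# Repair pass -- beware: repairing can drop corrupted rows,
# so take a file-level backup of the data directory first
mysqlcheck -u root -p --all-databases --repair
```

Run the check first on its own; only move to --repair once you have a backup, since dropped rows are not recoverable otherwise.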
Daniel Wilson (author) commented:
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63711
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63711
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
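The output above shows `open files (-n) 1024` -- the stock default, and the likely culprit behind Errcode 24. Note, though, that ulimit reports the limit for your shell, not necessarily the one the running mysqld inherited. On Linux you can read a live process's actual limit from /proc (a sketch; `$$` is the current shell's PID, used here only because it always exists -- substitute mysqld's PID):

```shell
# Per-process limits live in /proc/<pid>/limits on Linux.
# Demonstrated on the current shell; for the server use e.g.:
#   grep 'open files' /proc/$(pidof mysqld)/limits
grep 'open files' /proc/$$/limits
```

If the "Max open files" line for mysqld still shows 1024 after you raise the shell limit, the server never picked up the change and needs a restart under the new limit.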
Daniel Wilson (author) commented:
Without another restart, running mysqlcheck and mysqlrepair followed by mysqldump ... still gets me the errors.

I'll have to try this again when I can take the site offline for a little while.

Thanks for the help.
omarfarid commented:
Please check how many open files you have at the moment you get the error message. You can increase the open-files limit by raising the file descriptor limit. See the link below on how to do that:

https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/

The commands below are taken from the link above.

Check the current open file descriptor limit:

# more /proc/sys/fs/file-max

To find out how many file descriptors are currently being used:

# more /proc/sys/fs/file-nr
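For a persistent fix (a sketch, not the exact steps from the link; the 8192 value is illustrative), you can also raise mysqld's own limit in the server configuration:

```ini
# /etc/my.cnf (or a file under /etc/mysql/conf.d/) -- illustrative value
[mysqld]
open_files_limit = 8192
```

On the OS side, the per-user descriptor limit can be raised in /etc/security/limits.conf (e.g. `mysql soft nofile 8192`); restart mysqld afterwards so the new limit takes effect.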
Daniel Wilson (author) commented:
Got it with a reboot. Thanks for your help!
omarfarid commented:
Welcome