• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 859

MySQL data insertion issue: The total number of locks exceeds the lock table size

I am trying to insert 20,000 records into a MySQL database table. I am using OpenJPA to insert the data.

The MySQL storage engine is InnoDB. I configured the buffer pool size with innodb_buffer_pool_size=2048M, which is equivalent to 2 GB. I have since increased it to 12 GB, but I am still facing the same problem. Can anyone suggest a way to resolve the issue?

The detailed stack trace is below:

Caused by: <openjpa-1.2.0-r422266:683325 nonfatal general error> org.apache.openjpa.persistence.PersistenceException: The total number of locks exceeds the lock table size {prepstmnt 1122871931 INSERT INTO TrendHour (avgValueOH, avgValueRH, maxValueOH, maxValueRH, minValueOH, minValueRH, periodEnd, periodStart, stdDevOH, stdDevRH, totalRecordsOH, totalRecordsRH, objectId, resourceId, resourceTypeId, parameterGroupId, parameterId) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) [params=(double) 1.894, (double) 1.894, (double) 2.661, (double) 2.661, (double) 0.782, (double) 0.782, (Timestamp) 2011-04-26 14:59:59.0, (Timestamp) 2011-04-26 14:00:00.0, (double) 0.6339328, (double) 0.6339328, (int) 5, (int) 5, (int) 1179, (int) 11539, (null) null, (int) 18, (int) 22]} [code=1206, state=HY000]
FailedObject: netvalence.capstone.manager.database.tables.TrendHourTO@47969363
      at org.apache.openjpa.jdbc.sql.DBDictionary.narrow(DBDictionary.java:4238)
      at org.apache.openjpa.jdbc.sql.DBDictionary.newStoreException(DBDictionary.java:4203)
      at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:102)
      at org.apache.openjpa.jdbc.sql.SQLExceptions.getStore(SQLExceptions.java:72)
      at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushAndUpdate(PreparedStatementManagerImpl.java:131)
      at org.apache.openjpa.jdbc.kernel.BatchingPreparedStatementManagerImpl.flushAndUpdate(BatchingPreparedStatementManagerImpl.java:82)
      at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flushInternal(PreparedStatementManagerImpl.java:89)
      at org.apache.openjpa.jdbc.kernel.PreparedStatementManagerImpl.flush(PreparedStatementManagerImpl.java:72)
      at org.apache.openjpa.jdbc.kernel.ConstraintUpdateManager.flush(ConstraintUpdateManager.java:543)
      at org.apache.openjpa.jdbc.kernel.ConstraintUpdateManager.flush(ConstraintUpdateManager.java:105)
      at org.apache.openjpa.jdbc.kernel.BatchingConstraintUpdateManager.flush(BatchingConstraintUpdateManager.java:59)
      at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:89)
      at org.apache.openjpa.jdbc.kernel.AbstractUpdateManager.flush(AbstractUpdateManager.java:72)
      at org.apache.openjpa.jdbc.kernel.JDBCStoreManager.flush(JDBCStoreManager.java:655)
      at org.apache.openjpa.kernel.DelegatingStoreManager.flush(DelegatingStoreManager.java:130)
Asked by: sathish_raos
1 Solution
 
brutaldev Commented:
You are correct in trying to increase the buffer size via the innodb_buffer_pool_size setting. However, this is not always a foolproof way to avoid this error when working with large amounts of data. In your case the setting is massive, but be aware that you need to restart the MySQL service for the setting to take effect; if you haven't done that, I would imagine that's your problem.
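
As a quick sanity check after the restart, confirm that the server actually picked up the new value (a minimal check; run it from any MySQL client). The variable is reported in bytes, so 2048M should come back as 2147483648:

    -- Reported in bytes; 2048M = 2147483648, 12G = 12884901888.
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';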

If all else fails, you are going to have to split the inserts up into smaller batches (for instance, 20 batches of 1,000 records each).
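
A rough sketch of what the batched version could look like with a plain JPA EntityManager (the method name, the "emf" factory, and the "records" list are just placeholders for whatever your code actually uses, and it assumes a RESOURCE_LOCAL persistence unit):

    // Hypothetical helper: commits every 1,000 rows so no single transaction
    // has to hold row locks for all 20,000 inserts at once.
    void insertInBatches(EntityManagerFactory emf, List<TrendHourTO> records) {
        final int batchSize = 1000;                    // 20 batches of 1,000 rows
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            for (int i = 0; i < records.size(); i++) {
                em.persist(records.get(i));
                if ((i + 1) % batchSize == 0) {
                    em.getTransaction().commit();      // releases this batch's locks
                    em.clear();                        // detach flushed entities
                    em.getTransaction().begin();
                }
            }
            em.getTransaction().commit();              // final partial batch
        } finally {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback();
            }
            em.close();
        }
    }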
 
sathish_raos (Author) Commented:
I restarted the MySQL server after changing the value of "innodb_buffer_pool_size", but I am still facing the problem.
 
brutaldev Commented:
Any way that you can split the inserts up into smaller batches?
 
sathish_raos (Author) Commented:
I can do that, but to do that I would have to change a lot of code.

Is there any way other than the above two solutions?
 
brutaldev Commented:
Unfortunately, this is a major drawback in the way locking is implemented in InnoDB, and one I've been waiting on a fix for as well. The bug has been known for years now (http://bugs.mysql.com/bug.php?id=15667), but people just work around it.

There is some more reading in a recent article about it, but still no apparent fix, even though people reckon setting the memory limit high enough will get it to work. The final thing I can leave you with is an article that walks you through figuring out how much memory the lock table is using, to help you understand the issue better: http://www.mysqldbahelp.com/2010/02/what-error-total-number-of-locks.html
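
If you want to see for yourself how much lock memory the big insert is holding while it runs, the standard InnoDB monitor output should show it (requires the PROCESS privilege; nothing here is specific to your setup):

    -- The TRANSACTIONS section prints a "... lock struct(s), heap size ..." line
    -- per active transaction, which shows how many lock structures it holds.
    SHOW ENGINE INNODB STATUS;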
 
sathish_raos (Author) Commented:
The suggested solution is helpful.
