BrianMc1958

asked on

HELP AGAIN!: STILL NEED to greatly speed up writes without a "batch" option!?

Dear Experts,

I already awarded points on this question to someone who was kind enough to explain how to string together a bunch of inserts into a single statement.  However, it turns out to be only 10% faster than what I'm currently doing.  I need something that will be at least 400% faster!  Here is the original post:

HELP!

I need a quick fix to speed up an application.  I’m doing read-a-record, write-a-record logic, with the writes going into an existing MS SQL table.  Previously, I had used a PreparedStatement, calling .addBatch() myself and committing every 1000 records.
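For context, this is roughly the pattern I had been using (the reader and Record bits are just stand-ins for my real read loop, and the table/column names are made up):

// Batch up the inserts and commit every 1000 records.
con.setAutoCommit(false);
PreparedStatement ps = con.prepareStatement(
    "INSERT INTO myTable (name, num) VALUES (?, ?)");
int count = 0;
while (reader.hasNext())
{
  Record r = reader.next();      // read-a-record...
  ps.setString(1, r.getName());  // ...write-a-record
  ps.setInt(2, r.getNum());
  ps.addBatch();
  if (++count % 1000 == 0)
  {
    ps.executeBatch();
    con.commit();
  }
}
ps.executeBatch();               // flush the last partial batch
con.commit();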

However, due to larger database changes (which are out of my control), addBatch() no longer has any effect.  (Technically, the db recovery option was changed to “simple”, so MS SQL ignores autocommit altogether and writes each record immediately when I call .addBatch().)

I need to at least quadruple the current speed.  The table I’m writing to is (uniquely) keyed, but I can be guaranteed that new records will be unique.

The only thing I can think of right now would be to write the records temporarily to a text file, do a bulk insert into a staging table, and then do an outer join.  (Did I think of that myself? NO!  I learned it at Experts Exchange!)

But it seems nuts to have to write the records to a text file in order to take advantage of bulk insert.  I have thought about writing the records directly to an un-keyed staging table, but with “simple” recovery mode, those writes would themselves happen one at a time, which I think would be very slow.  It seems much faster to actually write to a text file.
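For what it's worth, here is a rough sketch of that text-file route (the file path, the myStaging table, and the delimiters are all made up, and you need permission to run BULK INSERT on the server):

// Sketch: dump the rows to a flat file, BULK INSERT into an un-keyed
// staging table, then move everything to the real table in one shot.
// (The file path is as seen from the SQL Server machine.)
PrintWriter out = new PrintWriter(new FileWriter("C:\\temp\\rows.txt"));
while (reader.hasNext())
{
  Record r = reader.next();
  out.println(r.getName() + "\t" + r.getNum());
}
out.close();

Statement st = con.createStatement();
st.executeUpdate("BULK INSERT myStaging FROM 'C:\\temp\\rows.txt'"
    + " WITH (FIELDTERMINATOR = '\\t', ROWTERMINATOR = '\\n')");
st.executeUpdate("INSERT INTO myTable SELECT * FROM myStaging");
st.executeUpdate("TRUNCATE TABLE myStaging");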

Is there another way within Java that I could say “Insert these 1000 records all at once”?

Am I missing something here?  Under the circumstances,  what should I do?  HELP!

THANKS!
--BrianMc1958
ASKER CERTIFIED SOLUTION
CEHJ
BrianMc1958

ASKER

For anyone reading this, I think I found my answer:

sb.append("INSERT INTO myTable SELECT 'Joe', 123\n ");
for (int i = 0; i < 1000; i++)
{
  sb.append("UNION ALL SELECT 'Bob', 456 \n ");
}
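
Then the whole thing goes over in one round trip, assuming con is the open Connection:

Statement st = con.createStatement();
st.executeUpdate(sb.toString());  // one statement inserts all the rows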

In my first test here, it's 1000% faster.  Oddly, I get this message if I increase the number of iterations much beyond 1000:

  The query processor ran out of stack space during query optimization.

Of course, you're making one very long query here...!
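
If anyone else hits that error, the obvious workaround seems to be to flush every 1000 rows or so.  A rough sketch (same made-up reader/Record as above):

// Send the generated INSERT in chunks so no single statement gets big
// enough for the optimizer to run out of stack space.
Statement st = con.createStatement();
StringBuilder sb = new StringBuilder();
int inChunk = 0;
while (reader.hasNext())
{
  Record r = reader.next();
  sb.append(inChunk == 0 ? "INSERT INTO myTable " : "UNION ALL ");
  sb.append("SELECT '" + r.getName() + "', " + r.getNum() + " ");
  if (++inChunk == 1000)
  {
    st.executeUpdate(sb.toString());
    sb.setLength(0);
    inChunk = 0;
  }
}
if (inChunk > 0)
{
  st.executeUpdate(sb.toString()); // flush the last partial chunk
}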


As for the points: I have also heard many times that dropping indexes before a big load and recreating them afterwards helps a lot.  Plus, CEHJ has helped me out so many zillions of times in the past...
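Something like this, I gather (the index name is made up, and it only applies if the unique key is a plain unique index rather than the primary key):

Statement st = con.createStatement();
st.executeUpdate("DROP INDEX myTable.IX_myTable_key");
// ... do the big load here ...
st.executeUpdate("CREATE UNIQUE INDEX IX_myTable_key ON myTable (name)");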

--BrianMc1958
Oops:  in the above code, you can remove the two "\n".  It works either way, but they are not needed.