HELP AGAIN!: STILL NEED to greatly speed up writes without a "batch" option!?

Dear Experts,

I already awarded points on this question to someone who was kind enough to explain how to string together a bunch of inserts into a single statement.  However, it turns out to be only 10% faster than what I'm currently doing.  I need something that will be at least 400% faster!  Here is the original post:

HELP!

I need a quick fix to speed up an application.  I'm doing read-a-record, write-a-record logic, with the writes going into an existing MS SQL table.  Previously, I used a PreparedStatement and did my own .addBatch() calls, with a .commit() every 1000 records (roughly as sketched below).
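
Roughly what I had (a sketch only; the table name, columns, and record source below are made up for illustration, not my real schema):

import java.sql.*;

// Sketch of the old read-a-record, write-a-record loop.
// "myTable", its columns, and the source ResultSet are placeholders.
void copyRecords(Connection con, ResultSet src) throws SQLException {
  con.setAutoCommit(false);
  PreparedStatement ps = con.prepareStatement(
      "INSERT INTO myTable (name, num) VALUES (?, ?)");
  int count = 0;
  while (src.next()) {
    ps.setString(1, src.getString("name"));
    ps.setInt(2, src.getInt("num"));
    ps.addBatch();
    if (++count % 1000 == 0) {  // send a batch and commit every 1000 rows
      ps.executeBatch();
      con.commit();
    }
  }
  ps.executeBatch();  // flush the remainder
  con.commit();
}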

However, due to larger db changes (which I can’t change), the addBatch() no longer has an effect.  (Technically, the db recovery option was changed to “simple”, so MS SQL ignores autocommit altogether and writes the record immediately when I do .addBatch()).

I need to at least quadruple the current speed.  The table I'm writing to is (uniquely) keyed, but the new records are guaranteed to be unique.

The only thing I can think of right now would be to write each record temporarily to a text file, do a bulk insert into a staging table, and then do an insert with an outer join back to the real table (sketched below).  (Did I think of that myself? NO!  I learned it at Experts Exchange!)

But it seems nuts to have to write the records to a text file in order to take advantage of bulk insert.  I have thought about writing the records directly to an un-keyed staging table, but with “simple” recovery mode, those writes would themselves happen one at a time, which I think would be very slow.  It seems much faster to actually write to a text file.
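
For what it's worth, the whole thing would look something like this from JDBC (again just a sketch: the file path, staging table, and key column are made-up names, and BULK INSERT can only read a file that is visible to the SQL Server machine):

import java.sql.*;

// Sketch of the text-file + staging-table idea.  All names are placeholders.
void bulkLoad(Connection con) throws SQLException {
  Statement stmt = con.createStatement();
  // Load the flat file into the un-keyed staging table in one shot.
  stmt.executeUpdate(
      "BULK INSERT myStaging FROM 'C:\\load\\records.txt' " +
      "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')");
  // One set-based insert, outer-joining back to the real table so any
  // keys already present are skipped.
  stmt.executeUpdate(
      "INSERT INTO myTable " +
      "SELECT s.* FROM myStaging s " +
      "LEFT JOIN myTable t ON t.keyCol = s.keyCol " +
      "WHERE t.keyCol IS NULL");
  stmt.executeUpdate("TRUNCATE TABLE myStaging");
}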

Is there another way within Java that I could say “Insert these 1000 records all at once”?

Am I missing something here?  Under the circumstances, what should I do?  HELP!

THANKS!
--BrianMc1958
BrianMc1958 asked:

CEHJ commented:
Try dropping the index before insertion and recreating it after commit
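
Something like this (the index and column names below are just examples, not your real ones):

// Sketch: drop the unique index, do the big insert, then rebuild it.
// "ix_myTable_key" and "keyCol" are placeholder names.
Statement stmt = con.createStatement();
stmt.executeUpdate("DROP INDEX ix_myTable_key ON myTable");
// ... run all the inserts here, then commit ...
stmt.executeUpdate(
    "CREATE UNIQUE INDEX ix_myTable_key ON myTable (keyCol)");
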
BrianMc1958 (Author) commented:
For anyone reading this, I think I found my answer:

sb.append("INSERT INTO myTable SELECT 'Joe', 123\n ");
for (int i = 0; i < 1000; i++)
{
  sb.append("UNION ALL SELECT 'Bob', 456 \n ");
}

In my first test here, it's 1000% faster.  Oddly, I get this message if I increase the number of iterations much beyond 1000:

  The query processor ran out of stack space during query optimization.

Of course, you're making one very long query here...!
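
If you ever need more than about 1000 rows, it looks like you can just send the statement in chunks, e.g. (another sketch; the values, row count, and stmt are placeholders):

// Sketch: send the UNION ALL insert in chunks of 1000 rows to stay
// under the optimizer's stack limit.  totalRows and stmt are assumed
// to be set up elsewhere.
int chunk = 1000;
for (int sent = 0; sent < totalRows; sent += chunk) {
  StringBuilder sb = new StringBuilder("INSERT INTO myTable ");
  int rows = Math.min(chunk, totalRows - sent);
  for (int i = 0; i < rows; i++) {
    if (i > 0) sb.append("UNION ALL ");
    sb.append("SELECT 'Bob', 456 ");
  }
  stmt.executeUpdate(sb.toString());
}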


As for the points: I have also heard many times that dropping indexes and recreating them later helps a lot.  Plus, CEHJ has helped me out so many zillions of times in the past...

--BrianMc1958
BrianMc1958 (Author) commented:
Oops: in the above code, you can remove the two "\n" strings.  The code works either way; they just aren't needed.