Locked table with JDBC Statement.executeQuery()

To anyone who can help: I have a problem with a Java application I am developing that interfaces with an MS SQL database. I am running JDK 1.5 on Win2K Server, using the JDBC-ODBC bridge driver. I confirmed through the SQL Enterprise Manager logs that I have a locked table. The strange thing is that it doesn't lock up until I have processed 3000 or so records. Since I am incorporating re-use code into my application, I did not look closely until now, and discovered that I have code with nested ResultSet objects trying to access the same tables, like the example below:

Connection con1 = null;
Connection con2 = null;
Statement stmt1 = null;
Statement stmt2 = null;

public void Method1InClass() throws SQLException {
   String sqlQuery = null;
   String aResultString = null;
   ResultSet results1 = null;

   sqlQuery = "SELECT tableA.value1A, tableB.value1B "
            + "FROM tableA, tableB "
            + "WHERE tableA.id = tableB.id";

   con1 = DriverManager.getConnection(url, "", "");
   stmt1 = con1.createStatement();
   results1 = stmt1.executeQuery(sqlQuery);

   while (results1.next()) {
      aResultString = results1.getString("value1A");
      Method2InClass(aResultString);   // nested query while results1 is still open
   }

   con1 = null;
}

private void Method2InClass(String aName) throws SQLException {
   String sqlQuery = null;
   ResultSet results2 = null;

   sqlQuery = "SELECT tableA.value2A, tableB.value2B "
            + "FROM tableA, tableB "
            + "WHERE tableA.value1A = '" + aName + "' "
            + "AND tableB.value2B = '<STRING>'";

   con2 = DriverManager.getConnection(url, "", "");
   stmt2 = con2.createStatement();

   results2 = stmt2.executeQuery(sqlQuery);   // <--- CODE HANGS HERE AFTER ~3000 ROWS

   con2 = null;
   stmt2 = null;
}
Maybe there is something I am missing, but I would have expected the code to hang/lock up a lot sooner than 3000 records, on account of the multiple ResultSet objects squashing each other from the get-go. Stranger still, I do not have this problem on another DB that I access. The difference is that the DB that processes tens of thousands of records with no problem has pointers to its data on the local disk, while the DB that fails at ~3000 records has referential data on a remote server accessed through Samba. Any guesses as to what I can do to get past this problem (aside from re-coding things, which I'll probably end up doing anyway), or what I am doing wrong?
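[Editor's note] One common way out of this pattern is to drain the outer ResultSet completely into a list and close it before running any per-row lookups, so only one ResultSet is ever open. A minimal sketch, assuming the tableA/value1A names from the snippet above; the class and helper names (DrainFirst, drainOuter, innerSql) are hypothetical, not from the original code:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class DrainFirst {
    // Read the outer result completely and close it BEFORE issuing any
    // per-row lookups, so only one ResultSet is open at a time.
    static List<String> drainOuter(Connection con) throws SQLException {
        List<String> names = new ArrayList<String>();
        Statement stmt = con.createStatement();
        try {
            ResultSet rs = stmt.executeQuery(
                "SELECT tableA.value1A FROM tableA, tableB WHERE tableA.id = tableB.id");
            while (rs.next()) {
                names.add(rs.getString("value1A"));
            }
        } finally {
            stmt.close();  // closing the Statement also closes its ResultSet
        }
        return names;
    }

    // Builds the inner query with a minimal single-quote escape; a
    // PreparedStatement with a ? placeholder would be better still.
    static String innerSql(String aName) {
        return "SELECT tableA.value2A, tableB.value2B FROM tableA, tableB "
             + "WHERE tableA.value1A = '" + aName.replace("'", "''") + "'";
    }

    public static void main(String[] args) throws SQLException {
        if (args.length == 0) return;  // pass a JDBC URL to actually run
        Connection con = DriverManager.getConnection(args[0], "", "");
        try {
            for (String name : drainOuter(con)) {
                // run innerSql(name) here -- the outer ResultSet is already closed
            }
        } finally {
            con.close();
        }
    }
}
```

This trades some memory (the list of outer values) for never holding two open cursors on the same tables at once, and it closes statements explicitly instead of just nulling the references.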
jazzkiAuthor Commented:
P.S. I also unsuccessfully tried processing the ResultSet objects in batches by trying to use Statement.setFetchSize(), but I get a "java.sql.SQLException: Invalid Fetch Size" error whenever I run my app. Don't understand.
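[Editor's note] Per the JDK 1.5 javadoc, Statement.setFetchSize throws SQLException unless 0 <= rows <= getMaxRows() (when maxRows is set), and drivers such as the JDBC-ODBC bridge may additionally reject sizes they cannot honor. A small sketch of the spec's validity rule; the class and helper names here are mine, not part of the JDBC API:

```java
public class FetchSizeCheck {
    // Mirrors the JDBC rule for Statement.setFetchSize: fetchSize must be
    // >= 0, and if maxRows is non-zero, fetchSize must not exceed it.
    // A fetchSize of 0 means "let the driver pick a default".
    static boolean isValidFetchSize(int fetchSize, int maxRows) {
        return fetchSize >= 0 && (maxRows == 0 || fetchSize <= maxRows);
    }

    public static void main(String[] args) {
        System.out.println(isValidFetchSize(0, 0));     // prints true
        System.out.println(isValidFetchSize(500, 100)); // prints false
    }
}
```

So "Invalid Fetch Size" can mean either a value outside that range or simply a driver (like the bridge) that does not support the requested size.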
Mayank S (Associate Director - Product Engineering) Commented:
>> MS SQL DB. I am running JDK 1.5 on Win2K Server using the ODBC bridge driver in JDBC

Try the M$ JDBC driver instead.
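[Editor's note] A minimal sketch of registering Microsoft's native driver in place of the bridge. The class name and URL below are from the SQL Server 2005 driver; the 2000-era driver instead used com.microsoft.jdbc.sqlserver.SQLServerDriver with a jdbc:microsoft:sqlserver:// URL, and host/port/database/credentials here are placeholders. The selectMethod=cursor property asks the driver for server-side cursors, which can behave better when result sets are held open for a long time:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class MsDriverExample {
    // Placeholder URL; adjust host, port, and databaseName for your server.
    static final String URL =
        "jdbc:sqlserver://localhost:1433;databaseName=mydb;selectMethod=cursor";

    public static void main(String[] args) throws Exception {
        // Pre-JDBC-4 drivers (JDK 1.5 era) must be registered explicitly.
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        Connection con = DriverManager.getConnection(URL, "user", "password");
        try {
            // ... run queries as before ...
        } finally {
            con.close();
        }
    }
}
```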