nixj14
asked on
Java/JDBC utilizing memory properly w/o thrashing
I've been given a very simple task: write a process that transfers data from one server/database to another server/database. Seems easy. The hard part is the amount of data I'm having to transfer (anywhere from one million to 50 million records). I'm trying to design the process so that it takes memory resources into account while running, but I'm still seeing a large amount of thrashing. So my question is: how can I take this simple task and make it run optimally?
Here's a snippet of what I got so far.
..............
//grab 1st connection
Properties props = new java.util.Properties();
props.put("user", userid);
props.put("password", password);
props.put("block size", "512");
conn = DriverManager.getConnection(url, props);
conn.setTransactionIsolation(Connection.TRANSACTION_NONE);
Statement st = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
ResultSet rs = st.executeQuery("...");
...
//grab 2nd connection
props = new java.util.Properties();
props.put("user", userid);
props.put("password", password);
props.put("block size", "512");
conn2 = DriverManager.getConnection(url, props);
//populate
conn2.setAutoCommit(false);
PreparedStatement ps = conn2.prepareStatement("...");
while (rs.next())
{
    ...
    ps.addBatch();
}
ps.executeBatch();
conn2.commit();
conn2.setAutoCommit(true); //restore the default
//close connections and done!
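The snippet above calls addBatch() for every row and executeBatch() only once at the end, so with tens of millions of rows the batch itself becomes the thing that eats the heap. The usual restructuring is to flush the batch every N rows and to hint the driver to stream the result set rather than buffer it. A sketch of that shape — the table, columns, 5,000-row batch size, and 1,000-row fetch size are all placeholders to be measured against real data, and setFetchSize is only advisory (some drivers ignore it):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BulkCopy {
    // Tunable; 1,000-10,000 rows per flush is a common starting range, but
    // the right value depends on row width, driver, and network -- measure it.
    static final int BATCH_SIZE = 5_000;

    // Pure helper: how many full batches a given row count produces.
    static long fullBatches(long rows, int batchSize) {
        return rows / batchSize;
    }

    // Sketch of the copy loop; queries and column layout are placeholders.
    static void copy(Connection src, Connection dst) throws SQLException {
        dst.setAutoCommit(false);
        try (Statement st = src.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(1_000); // advisory hint: fetch rows in chunks
                                    // instead of buffering the whole result
            try (ResultSet rs = st.executeQuery("SELECT id, name FROM src_table");
                 PreparedStatement ps = dst.prepareStatement(
                         "INSERT INTO dst_table (id, name) VALUES (?, ?)")) {
                long n = 0;
                while (rs.next()) {
                    ps.setLong(1, rs.getLong(1));
                    ps.setString(2, rs.getString(2));
                    ps.addBatch();
                    if (++n % BATCH_SIZE == 0) {
                        ps.executeBatch(); // flush so the batch never grows unbounded
                        dst.commit();      // and the transaction log stays small
                    }
                }
                ps.executeBatch();         // flush the remainder
                dst.commit();
            }
        } finally {
            dst.setAutoCommit(true);       // restore the connection's default
        }
    }
}
```

The point is that peak memory is now bounded by BATCH_SIZE rows plus the driver's fetch buffer, independent of whether the source holds one million rows or 50 million.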
Generally speaking, memory consumption depends on the JDBC driver implementation, and there isn't much you can do to reduce it (apart from specially crafted SQL statements).
Are you using BLOB fields?
ASKER
The SQL statements are too simple to optimize, and there are no BLOB fields.
ASKER
Oh, and I have no control over the JDBC driver.
ASKER CERTIFIED SOLUTION
ASKER
Basically, I was looking for JDBC specifics on what the block sizes should be, when (or whether) I should call executeBatch after a number of inserts have been batched up, and what that specific number should be. I'm looking for performance tuning options.
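For what it's worth, there doesn't seem to be a universally correct batch size — it depends on row width, driver buffering, and network round trips — so one pragmatic approach is to make the size a runtime knob and time a few candidate values against a representative sample of real data. A small sketch of that idea (the copy.batchSize property name is made up for illustration):

```java
public class BatchTuning {
    // Hypothetical property name: run with -Dcopy.batchSize=10000 to override.
    static int batchSize() {
        return Integer.getInteger("copy.batchSize", 5_000);
    }

    // Crude wall-clock timing; good enough to compare candidate batch sizes
    // (e.g. 1_000, 5_000, 10_000) over the same representative data set.
    static long timeMillis(Runnable copyJob) {
        long start = System.nanoTime();
        copyJob.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Run the same million-row sample once per candidate size and keep the fastest value that doesn't pressure the heap; in practice the gains tend to flatten out beyond a few thousand rows per batch.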