Solved

Bulk updates in SYBASE using Java Batch Updates

Posted on 2009-02-21
Medium Priority
5,286 Views
Last Modified: 2013-12-29
I have to execute many updates on a table using Java batch updates, but when I try to run my code against 22,000 records, it just hangs.

How can I optimize the updates to avoid logging overhead, the way NOARCHIVELOG does in an Oracle database? How can I make my updates faster?
String updateSql = "UPDATE AMF " +
                    " SET BLOCKED_CV = ?, " +
                    " BLOCKED_FC = ? " +
                    " WHERE BRANCH_CODE = ? " +
                    " AND CURRENCY_CODE = ? " +
                    " AND GL_CODE = ? " +
                    " AND CIF_SUB_NO = ? " +
                    " AND SL_NO = ? ";
            statement = conn.prepareStatement(updateSql);
            if (totalColumns > 0) {
                // iterate over the reconciled rows and add the updates to the batch
                for (int r = 0; r < totalRows; r++) {
 
                    rowVector = (Vector) IADataVector.get(r);
                    // only update those blocked amounts which exist on the IMAL side
                    if (rowVector.get(INDEX_PIBAS_ADDREF) != null &&
                            rowVector.get(INDEX_IMAL_ADDREF) != null) {
 
                        //System.out.println("updateRows= " + updateRows);
 
                        blockAmount = Double.parseDouble(rowVector.get(INDEX_PIBAS_SHDWBALN).toString());
                        branch = Integer.parseInt(rowVector.get(INDEX_IMAL_BRANCH).toString());
                        currency = Integer.parseInt(rowVector.get(INDEX_IMAL_CURRENCY).toString());
                        glCode = Long.parseLong(rowVector.get(INDEX_IMAL_GL).toString());
                        cifNo = Long.parseLong(rowVector.get(INDEX_IMAL_CIF).toString());
                        slNO = Integer.parseInt(rowVector.get(INDEX_IMAL_SL).toString());
 
                        if (currency == 586) {
                            statement.setDouble(1, blockAmount);
                            statement.setLong(2, 0);
                        } else {
                            statement.setLong(1, 0);
                            statement.setDouble(2, blockAmount);
                        }
                        statement.setInt(3, branch);
                        statement.setInt(4, currency);
                        statement.setLong(5, glCode);
                        statement.setLong(6, cifNo);
                        statement.setInt(7, slNO);
 
                        statement.addBatch();
                    }
                    updateRows++;
                }
                statement.executeBatch();
            }
            statement.close();
            conn.commit();


Question by:farhanitman

28 Comments
 
LVL 86

Expert Comment

by:CEHJ
ID: 23699417
I see you commit, but are you sure you setAutoCommit(false) first?
 
LVL 86

Expert Comment

by:CEHJ
ID: 23699462
If you are, you might try calling executeBatch after every n updateRows instead of waiting for totalRows
 

Author Comment

by:farhanitman
ID: 23699470
How can I executeBatch after every n updateRows?
And is the batch cleared when executeBatch finishes?
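
(For illustration, a minimal sketch of the chunked pattern CEHJ is describing; the method and variable names are assumed, not from the posted code. The JDBC spec leaves it driver-dependent whether executeBatch empties the batch, so calling clearBatch() explicitly is the safe pattern.)

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedBatch {
    // Binds each row of values to the prepared statement and executes the
    // batch every chunkSize rows, committing as it goes so one transaction
    // (and the log space it holds) never grows too large.
    static void runInChunks(Connection conn, PreparedStatement ps,
                            List<Object[]> values, int chunkSize) throws SQLException {
        conn.setAutoCommit(false);
        int pending = 0;
        for (Object[] row : values) {
            for (int i = 0; i < row.length; i++) {
                ps.setObject(i + 1, row[i]); // bind parameters 1..n
            }
            ps.addBatch();
            if (++pending % chunkSize == 0) {
                ps.executeBatch();
                ps.clearBatch(); // clearing after execute is driver-dependent, so do it explicitly
                conn.commit();
            }
        }
        if (pending % chunkSize != 0) {
            ps.executeBatch(); // flush the remainder
            conn.commit();
        }
    }
}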

Author Comment

by:farhanitman
ID: 23699475
And I used conn.setAutoCommit(false); as the first line.
 
LVL 19

Expert Comment

by:grant300
ID: 23700666
If you have to do it this way, write a stored procedure and call it as an RPC.  That is the lowest-overhead way to make 22,000 individual updates to the database.  Put the whole thing in one transaction so you can roll it all back in case of a failure; that makes recovery much easier.

As far as performance goes, you will be lucky to get more than about 800 rows per second pushing the updates in one at a time.  That assumes the WHERE clause is hitting a unique index when you do it.  If the fields in your WHERE clause do not match all the fields of a unique index, your problem is that you are not really identifying the rows with a useful key.  Trying to do 22,000 individual updates without an index is going to be somewhere between intolerably slow and never-comes-back, depending upon how many rows are in the target table.

What indexes do you have on the table?  Also, have you updated the statistics lately?  You should be using UPDATE INDEX STATISTICS <tablename> and be sure to issue an sp_recompile on the table as well so that any stored procs or query plans get updated.

Try issuing an update in a query tool with SHOWPLAN turned on.  This should give you some idea what the query plan is and why it is taking so long.  This is also a real-life demonstration of why you should never put SQL in an application program.  With the SQL in stored procedures, you can write, test, debug, and tune with tools far more efficiently than you can with the SQL trapped in a client application.

I have a general design question though.  What is the source of the list of 22,000 records to be updated?  Are you getting a flat file from someplace or are you pulling the information from the database in another piece of code?

If you are supplied with a flat file or otherwise source the list of records from outside the database, the preferred method is to load it into a working table and then do an Update/Join between the working and target tables.  You can use the BCP utility or even the bulk client interface to get the data loaded in a hurry without writing a bunch of log information.

If, on the other hand, you are getting the list from some other query of the database, you should be doing the whole thing in T-SQL and never even pulling the list down to the application, Java or otherwise.

Regards,
Bill
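
(A sketch of the RPC approach Bill describes; the procedure name p_update_amf_block and its parameter list are hypothetical, invented here for illustration.)

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class RpcUpdate {
    // Calls a stored procedure through JDBC's call syntax; jConnect typically
    // sends this as a TDS RPC, avoiding a re-parse of the SQL text per row.
    static void updateViaRpc(Connection conn, double blockedCv, double blockedFc,
                             int branch, int currency, long glCode,
                             long cifNo, int slNo) throws SQLException {
        CallableStatement cs = conn.prepareCall(
                "{call p_update_amf_block(?, ?, ?, ?, ?, ?, ?)}"); // hypothetical proc name
        cs.setDouble(1, blockedCv);
        cs.setDouble(2, blockedFc);
        cs.setInt(3, branch);
        cs.setInt(4, currency);
        cs.setLong(5, glCode);
        cs.setLong(6, cifNo);
        cs.setInt(7, slNo);
        cs.execute();
        cs.close();
    }
}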
 

Author Comment

by:farhanitman
ID: 23708112
In reply to grant300:
I am working on a reconciliation program between two different databases, Oracle and Sybase.
I have to query a few columns from an Oracle database and a few columns from a Sybase database, then show the results of the reconciliation in a JTable in the application.  After that, when the user presses a button, I have to copy the value of an Oracle column to the Sybase columns.
So I cannot use a database procedure in Sybase, because the source is the Oracle database.
Also, the columns in my update WHERE clause are the primary key of the table.
What would you suggest?
 

Author Comment

by:farhanitman
ID: 23708220
Furthermore, when I tried the same Java batch update design as here to insert over 0.2 million records into an Oracle database using the NOLOGGING hint, it took only 1-2 minutes, so I was astonished that Sybase takes so long to update only 22,000 records.
 
LVL 19

Expert Comment

by:grant300
ID: 23713446
First, you are comparing apples and oranges.  Doing unlogged (or rather minimally logged) inserts is significantly different than doing updates.
- You cannot turn off the logging for updates
- You cannot use the bulk load mechanism
- Updates are more work than Inserts

2000 rows per second for bulk inserts is not really very impressive.  If you are doing it across a network, you are probably living with some latencies.

We still don't have many specifics to work with here.  We still don't know what version of Sybase you are using.

Did you update statistics in Sybase as I suggested earlier?
Did you try running one of the updates manually in a query tool with SHOWPLAN set ON?  Post the query plan once you get it.
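
(For reference, a minimal SQL Advantage session for capturing the plan; the key values below are placeholders, so substitute a real row.)

set showplan on
go
UPDATE AMF
SET BLOCKED_CV = 0, BLOCKED_FC = 0
WHERE COMP_CODE = 1
AND BRANCH_CODE = 1003
AND CURRENCY_CODE = 586
AND GL_CODE = 210202
AND CIF_SUB_NO = 10059
AND SL_NO = 0
go
set showplan off
go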

If you need the most speed out of this thing, you will need to bulk insert the rows into a working table and then do the update en masse.  Sending 22,000 update statements to any database is going to be relatively slow; think on the order of 30 seconds.

Regards,
Bill


 

Author Comment

by:farhanitman
ID: 23718410
I don't have much expertise with Sybase, so I tried to update statistics with the reorg rebuild command on my table.  Then I also changed my code (see attached code) to execute the batch after every n updates (every 600 statements).  When I ran it, it was pretty fast in the beginning, 4 or 5 seconds until it had processed 2,400 records, but after 2,400 records it got really slow, and kept getting slower and slower.
The columns in my WHERE clause are the PK of the table.
Sorry, but I only have SQL Advantage, so I cannot tell you about the query plan.  If you can suggest a free tool for Sybase which I can use to find the query plan, that would be nice.

conn.setAutoCommit(false);
            String updateSql = "UPDATE AMF " +
                    " SET BLOCKED_CV = ?, " +
                    " BLOCKED_FC = ? " +
                    " WHERE COMP_CODE = 1 " +
                    " AND BRANCH_CODE = ? " +
                    " AND CURRENCY_CODE = ? " +
                    " AND GL_CODE = ? " +
                    " AND CIF_SUB_NO = ? " +
                    " AND SL_NO = ? ";
            statement = conn.prepareStatement(updateSql);
            if (totalColumns > 0) {
                // iterate over the reconciled rows and add the updates to the batch
                for (int r = 0; r < totalRows; r++) {
 
                    rowVector = (Vector) IADataVector.get(r);
                    // only update those blocked amounts which exist on the IMAL side
                    if (rowVector.get(INDEX_PIBAS_ADDREF) != null &&
                            rowVector.get(INDEX_IMAL_ADDREF) != null) {
 
                        //System.out.println("updateRows= " + updateRows);
 
                        blockAmount = Double.parseDouble(rowVector.get(INDEX_PIBAS_SHDWBALN).toString());
                        branch = Integer.parseInt(rowVector.get(INDEX_IMAL_BRANCH).toString());
                        currency = Integer.parseInt(rowVector.get(INDEX_IMAL_CURRENCY).toString());
                        glCode = Long.parseLong(rowVector.get(INDEX_IMAL_GL).toString());
                        cifNo = Long.parseLong(rowVector.get(INDEX_IMAL_CIF).toString());
                        slNO = Integer.parseInt(rowVector.get(INDEX_IMAL_SL).toString());
 
                        if (currency == 586) {
                            statement.setDouble(1, blockAmount);
                            statement.setLong(2, 0);
                        } else {
                            statement.setLong(1, 0);
                            statement.setDouble(2, blockAmount);
                        }
                        statement.setInt(3, branch);
                        statement.setInt(4, currency);
                        statement.setLong(5, glCode);
                        statement.setLong(6, cifNo);
                        statement.setInt(7, slNO);
 
                        statement.addBatch();
                    }
                    updateRows++;
                    if(updateRows % 600 == 0) {
                        System.out.println("updateRows = " + updateRows);
                        // executing batch after every n=600 update statements have been added
                        statement.executeBatch();
                        statement.clearBatch();
                        conn.commit();
                    }
                }
                System.out.println("updateRows = " + updateRows);
                statement.executeBatch();
            }
            statement.close();
            conn.commit();

 

Author Comment

by:farhanitman
ID: 23718425
Sybase version is ASE 12.5.3
 
LVL 19

Expert Comment

by:grant300
ID: 23718625
Just issue the command "set showplan on" in SQL Advantage.  There may also be a menu option to turn it on.  The query plan will be displayed when you run the query.

Not to put too fine a point on it, but I am really suspicious that those 5 fields are actually the primary key.

In SQL Advantage, issue an "sp_help AMF" and post the results.  Post the query plan output as well.

Using executeBatch that way, you are really only sending the commands over in a batch; they are not actually executed any differently.

As I said before, you have two choices: write a stored procedure and call it as an RPC, or bulk load the data into a working table and then do a single update.  You are wasting your time trying to get this to run well because, best case, it is still slower than the two alternatives I outlined, and you are going to have a lot of problems tuning it.

Also, use UPDATE INDEX STATISTICS on the table.

Regards,
Bill
 

Author Comment

by:farhanitman
ID: 23719092
I am attaching both the sp_help output and the query plan as txt files.
As I wrote earlier, I am working on a reconciliation program between two different databases, ORACLE and SYBASE, so I have to reconcile data dynamically and show the result to the user in a JTable.  When the user wants to update a column in a Sybase table using a column from an Oracle table, I have to send this as an update with the values displayed in the JTable.
I can't use a database procedure, because the source data resides in the Oracle database, or to be more precise, in the JTable after reconciliation.

sp-help-AMF.txt
query-plan-amf.txt
 

Author Comment

by:farhanitman
ID: 23719109
I will try to bulk insert the result of the JTable into a temporary table in Sybase and then update the AMF table in Sybase.
 

Author Comment

by:farhanitman
ID: 23719116
How can I bulk insert data into a Sybase table?
 
LVL 19

Expert Comment

by:grant300
ID: 23723382
Looking at the SP-HELP output I notice several things right off the bat.

You have two identical indexes on the table, PK_AMF and IDX_AMF_PK.  The PK is clustered and the IDX is not.  This is a total waste and will only slow the system down.

Next, you have two more indexes, AMF_IDX6 and AMF_IDX_STATUS, with the same leading columns as IDX_AMF_PK plus additional column(s).  Someone built these as covering indexes but then really screwed up by not making them unique.  They are obviously unique in content, because they share the same columns as the PK and IDX_AMF_PK.  The problem is that non-unique indexes carry more overhead than unique ones.

The second problem with this scheme is that you have greatly increased the overhead of index maintenance.  Every time you touch a record (insert, delete, or non-in-place update) you have to pay to maintain all of these indexes.  Worse yet, all that index maintenance is logged.  Log writes cannot be cached for an extended period the way data page writes can live in memory until the next checkpoint, so they quickly become a bottleneck.

Overall system performance would be better served by crushing this mess into just one index.  Were it me, I would drop all 4 of them and create AMF_PSEUDO_PK as unique on COMP_CODE, BRANCH_CODE, CURRENCY_CODE, GL_CODE, CIF_SUB_NO, SL_NO, and STATUS.  Adding the coverage field STATUS does not hurt your "primary key" but retains the performance bump someone was looking for.  Dump AMF_IDX6 altogether, as the overhead does not justify dragging it around.  Particularly if the PK is clustered, a coverage index with that many columns does not buy you much.  It was probably just a crutch for other design, query optimization, and/or system tuning issues.
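
(A sketch of that consolidation; the index names are taken from the posted sp_help output, with the garbled one assumed to be AMF_IDX_STATUS.  If PK_AMF was declared as a primary key constraint rather than a plain index, it has to go via ALTER TABLE ... DROP CONSTRAINT instead of DROP INDEX.)

-- drop the redundant indexes
DROP INDEX AMF.IDX_AMF_PK
DROP INDEX AMF.AMF_IDX6
DROP INDEX AMF.AMF_IDX_STATUS
go
-- one unique clustered index covering the pseudo primary key plus STATUS
CREATE UNIQUE CLUSTERED INDEX AMF_PSEUDO_PK
ON AMF (COMP_CODE, BRANCH_CODE, CURRENCY_CODE, GL_CODE, CIF_SUB_NO, SL_NO, STATUS)
go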

Next, all those helper indexes that somebody added to speed up some query or process are really a mess.  None of them are unique, and almost all the columns in the table allow nulls.  This is really bad juju and, quite frankly, I am surprised that you get reasonable performance out of this thing.  Sybase is saving somebody's butt.

At the very least, there should be an identity column added to the table, and that field added to those helper indexes to make them unique.  In addition, the columns used in those indexes should be made non-nullable, with some kind of default value put in place wherever possible.  A non-unique index with that many nulls is like wearing a millstone around your neck.

I also noticed that BLOCKED_CV and BLOCKED_FC are NULLable so, depending on the starting values, the database cannot do in-place updates.  That means there is not enough room for the new version of the row (any value is longer than a null) where the row is currently sitting, so it has to move the row someplace else.  The update becomes a full-fledged delete and insert, with all the indexes having to be modified.

Now what is really bad is that your current PK is clustered.  That means that as the database has to split pages to make room for the bigger rows, it has to actually reorganize the table.  The clustered index in All Pages Locked (APL) mode keeps the table rows as leaves of the index.  It therefore keeps the rows in physically ordered sequence.  When you split pages in the middle of a clustered index, you force the database to push the entire page chain beyond that point down to make room for the new page in the middle.  This, I believe, is where your performance degradation is coming from.

Right after you do a reorg, there is some free space on each page.  The first updates that get peppered through the system are absorbed in that free space.  After a while, that free space gets eaten up as more updates occur on each page.  Then the page splits begin, and it all goes to hell trying to maintain the clustered index's physical order.

=====================================================================================
Next I looked at the query plan.  The actual update looks fine.  There is obviously a whole bunch of other stuff going on, though, that is highly suspect.  You have three rules on the table which I don't have the details on.  What I can tell is that the part of the plan that uses a cursor almost certainly means there is a trigger on this table that fires on updates; that is, unless you ran a lot more code than just one UPDATE statement.  That trigger is probably how columns like BLOCKED_DATE and BLOCKED_BY are being maintained, given that you are not doing it in your original UPDATE statement.

Opening a cursor in a trigger is really bad practice.  It will absolutely kill performance, and 99 times out of 100 there is absolutely no reason for a cursor at all.  This has all the feel of something an Oracle person would do, instead of writing a set-based trigger, to make up for the fact that Sybase fires triggers once per verb while Oracle has the option to fire once per row.

Next, it looks as if there is either dynamic SQL or a stored procedure (or XP Server) call of some kind in the middle, since there is an EXECUTE in the middle of the cursor loop.  Super bad juju.

======================================================================================

So here is the deal.  Drop the IDX_AMF_PK index.  Next, alter the table to Data Pages Only Locking (DOL).  You may also want to increase the free space settings before you do your next reorg.  What this will do is reduce the overhead required to maintain the clustered index.  The database will no longer try to maintain absolute physical order of the rows but rather keeps the rows on the pages and blocks of pages as ordered as it can without killing performance.  It is a good compromise in most situations.

Then, post the trigger code (along with anything else it might be calling) so we can see what is going on there.  The trigger is most likely the other block of lead in this balloon that needs to be cleaned up.

BTW....
You can do bulk inserts to Sybase a couple of different ways.  You can use the BCP utility, which takes a flat file and does streaming loads, or you can use the client library bulk load interface.  I am pretty sure the Sybase ODBC drivers support the bulk interface, but I am not sure about the JDBC drivers you are using.  If they are the DataDirect drivers, you can set the BatchPerformanceWorkaround connection parameter to TRUE and do bulk inserts to Sybase that way.

Regards,
Bill
 

Author Comment

by:farhanitman
ID: 23731996
What is the command for altering the table to Data Pages Only Locking (DOL)?
And how do I increase the free space settings?
I am attaching the trigger code.

CREATE TRIGGER TRG_AMF    
	ON AMF FOR INSERT  /*AFTER INSERT*/    
--Modified by syseng to add integration part   
--VERSION 1.0.1   
AS   
   
BEGIN   
	IF @@ROWCOUNT = 0    
		RETURN   
   
	DECLARE @current_comp_code NUMERIC(4,0)   
	DECLARE @current_cif_sub_no NUMERIC(8,0)   
	DECLARE @current_branch_code NUMERIC(4,0)   
	DECLARE @current_long_name_eng VARCHAR(40)   
	DECLARE @current_gl_code NUMERIC(6,0)   
	DECLARE @current_brief_name_eng VARCHAR(20)   
	DECLARE @current_sl_no NUMERIC(3,0)   
	DECLARE @current_currency_code NUMERIC(3,0)   
	DECLARE @current_maturity_dte DATETIME   
	DECLARE @current_gmi_flag VARCHAR(1)   
	DECLARE @previous_maturity_dte DATETIME   
	DECLARE @previous_cpm_dr_trans NUMERIC(12,0)   
	DECLARE @previous_min_stmt_bal NUMERIC(20,3)   
	DECLARE @previous_new_gl NUMERIC(6,0)   
	DECLARE @previous_min_dep NUMERIC(20,6)   
	DECLARE @previous_auth_sign VARCHAR(60)   
	DECLARE @previous_overlap_cv_val NUMERIC(18,3)   
	DECLARE @previous_last_act_usr VARCHAR(8)   
	DECLARE @previous_lpm_cv_move NUMERIC(18,3)   
	DECLARE @previous_dept NUMERIC(3,0)   
	DECLARE @previous_olm_cr_trans NUMERIC(12,0)   
	DECLARE @previous_gmi_flag VARCHAR(1)   
	DECLARE @previous_long_name_eng VARCHAR(40)   
	DECLARE @previous_lpm_fc_move NUMERIC(18,3)   
	DECLARE @previous_date_closd DATETIME   
	DECLARE @previous_bf_cv_val NUMERIC(18,3)   
	DECLARE @previous_mail_stmt VARCHAR(1)   
	DECLARE @previous_fl_ind VARCHAR(1)   
	DECLARE @previous_point_accrual_to_mat VARCHAR(1)   
	DECLARE @previous_acc_cv_points NUMERIC(18,3)   
	DECLARE @previous_ytd_dr_trans NUMERIC(12,0)   
	DECLARE @previous_obp_cv_val NUMERIC(18,3)   
	DECLARE @previous_blocked_cv NUMERIC(20,3)   
	DECLARE @previous_actual_deposit NUMERIC(20,3)   
	DECLARE @previous_overlap_fc_val NUMERIC(18,3)   
	DECLARE @previous_last_cv_points NUMERIC(18,3)   
	DECLARE @previous_ext_trf VARCHAR(1)   
	DECLARE @previous_cr_pt_index VARCHAR(1)   
	DECLARE @previous_dr_pt_index VARCHAR(1)   
	DECLARE @previous_trf_gl NUMERIC(6,0)   
	DECLARE @previous_initial_deposit_dte DATETIME   
	DECLARE @previous_new_ind VARCHAR(1)   
	DECLARE @previous_overlap_cv_bal NUMERIC(18,3)   
	DECLARE @previous_trf_sl NUMERIC(3,0)   
	DECLARE @previous_fl_amt NUMERIC(14,3)   
	DECLARE @previous_bf_fc_val NUMERIC(18,3)   
	DECLARE @previous_modarib_rate NUMERIC(9,6)   
	DECLARE @previous_lpm_cr_trans NUMERIC(12,0)   
	DECLARE @previous_down_grd_dte DATETIME   
	DECLARE @previous_dr_appl_date DATETIME   
	DECLARE @previous_obp_fc_val NUMERIC(18,3)   
	DECLARE @previous_date_opnd DATETIME   
	DECLARE @previous_profit_calc_group NUMERIC(4,0)   
	DECLARE @previous_old_status VARCHAR(1)   
	DECLARE @previous_origin_br NUMERIC(4,0)   
	DECLARE @previous_update_passbook VARCHAR(1)   
	DECLARE @previous_deposit_dte DATETIME   
	DECLARE @previous_bf_cv_bal NUMERIC(18,3)   
	DECLARE @previous_off_gl NUMERIC(6,0)   
	DECLARE @previous_acc_no VARCHAR(15)   
	DECLARE @previous_acc_bc VARCHAR(1)   
	DECLARE @previous_obp_cv_bal NUMERIC(18,3)   
	DECLARE @previous_origin VARCHAR(1)   
	DECLARE @previous_ldc_cr_trans NUMERIC(12,0)   
	DECLARE @previous_off_sl NUMERIC(3,0)   
	DECLARE @previous_auth_rej_usr VARCHAR(8)   
	DECLARE @previous_lpm_period_no NUMERIC(2,0)   
	DECLARE @previous_pt_method_days VARCHAR(1)   
	DECLARE @previous_overlap_fc_bal NUMERIC(18,3)   
	DECLARE @previous_acc_fc_points NUMERIC(18,3)   
	DECLARE @previous_add_date5 DATETIME   
	DECLARE @previous_cpm_cv_move NUMERIC(18,3)   
	DECLARE @previous_add_date4 DATETIME   
	DECLARE @previous_add_date3 DATETIME   
	DECLARE @previous_withd_per NUMERIC(20,6)   
	DECLARE @previous_add_date2 DATETIME   
	DECLARE @previous_add_date1 DATETIME   
	DECLARE @previous_bf_fc_bal NUMERIC(18,3)   
	DECLARE @previous_entered_by VARCHAR(8)   
	DECLARE @previous_currency_code NUMERIC(3,0)   
	DECLARE @previous_last_fc_points NUMERIC(18,3)   
	DECLARE @previous_trs_no NUMERIC(12,0)   
	DECLARE @previous_chq_srl NUMERIC(8,0)   
	DECLARE @previous_cpm_fc_move NUMERIC(18,3)   
	DECLARE @previous_obp_fc_bal NUMERIC(18,3)   
	DECLARE @previous_pt_ind VARCHAR(1)   
	DECLARE @previous_status VARCHAR(1)   
	DECLARE @previous_closed_by VARCHAR(8)   
	DECLARE @previous_last_date_calc DATETIME   
	DECLARE @previous_olm_dr_trans NUMERIC(12,0)   
	DECLARE @previous_renew VARCHAR(1)   
	DECLARE @previous_print_cr_advice VARCHAR(1)   
	DECLARE @previous_fc_avail_bal NUMERIC(18,3)   
	DECLARE @previous_reinstated_by VARCHAR(10)   
	DECLARE @previous_trf_cy NUMERIC(3,0)   
	DECLARE @previous_last_trs_date DATETIME   
	DECLARE @previous_additional_reference VARCHAR(35)   
	DECLARE @previous_deleted_by VARCHAR(8)   
	DECLARE @previous_account_number VARCHAR(34)   
	DECLARE @previous_lpm_dr_trans NUMERIC(12,0)   
	DECLARE @previous_date_reinstated DATETIME   
	DECLARE @previous_date_modified DATETIME   
	DECLARE @previous_gl_code NUMERIC(6,0)   
	DECLARE @previous_ytd_cv_bal NUMERIC(18,3)   
	DECLARE @previous_off_cy NUMERIC(3,0)   
	DECLARE @previous_deposit_rate NUMERIC(10,6)   
	DECLARE @previous_pft_gl NUMERIC(6,0)   
	DECLARE @previous_bf_cr_trans NUMERIC(12,0)   
	DECLARE @previous_ac_sign VARCHAR(1)   
	DECLARE @previous_ldc_dr_trans NUMERIC(12,0)   
	DECLARE @previous_pft_sl NUMERIC(3,0)   
	DECLARE @previous_pt_period VARCHAR(1)   
	DECLARE @previous_sl_no NUMERIC(3,0)   
	DECLARE @previous_cpm_cr_trans NUMERIC(12,0)   
	DECLARE @previous_closed_ind VARCHAR(1)   
	DECLARE @previous_trf_cif NUMERIC(8,0)   
	DECLARE @previous_last_act_dte DATETIME   
	DECLARE @previous_off_cif NUMERIC(8,0)   
	DECLARE @previous_lock_status VARCHAR(1)   
	DECLARE @previous_comm_fees NUMERIC(7,3)   
	DECLARE @previous_olm_cv_bal NUMERIC(18,3)   
	DECLARE @previous_ytd_fc_bal NUMERIC(18,3)   
	DECLARE @previous_pft_post_to VARCHAR(1)   
	DECLARE @previous_pft_cif NUMERIC(8,0)   
	DECLARE @previous_date_suspended DATETIME   
	DECLARE @previous_suspended_by VARCHAR(10)   
	DECLARE @previous_deposit_equiv NUMERIC(15,3)   
	DECLARE @previous_eff_clsd_dte DATETIME   
	DECLARE @previous_cv_avail_bal NUMERIC(18,3)   
	DECLARE @previous_brief_name_arab VARCHAR(20)   
	DECLARE @previous_ldc_cv_bal NUMERIC(18,3)   
	DECLARE @previous_modified_by VARCHAR(8)   
	DECLARE @previous_date_deleted DATETIME   
	DECLARE @previous_pt_method VARCHAR(1)   
	DECLARE @previous_branch_code NUMERIC(4,0)   
	DECLARE @previous_date_entered DATETIME   
	DECLARE @previous_print_dr_advice VARCHAR(1)   
	DECLARE @previous_comp_code NUMERIC(4,0)   
	DECLARE @previous_cr_pt_rate NUMERIC(11,6)   
	DECLARE @previous_olm_fc_bal NUMERIC(18,3)   
	DECLARE @previous_add_number5 NUMERIC(20,3)   
	DECLARE @previous_mail_advise VARCHAR(1)   
	DECLARE @previous_add_number4 NUMERIC(20,3)   
	DECLARE @previous_add_number3 NUMERIC(20,3)   
	DECLARE @previous_ldc_date DATETIME   
	DECLARE @previous_add_number2 NUMERIC(20,3)   
	DECLARE @previous_add_number1 NUMERIC(20,3)   
	DECLARE @previous_brief_name_eng VARCHAR(20)   
	DECLARE @previous_ytd_cr_trans NUMERIC(12,0)   
	DECLARE @previous_cr_appl_date DATETIME   
	DECLARE @previous_div NUMERIC(3,0)   
	DECLARE @previous_add_string5 VARCHAR(15)   
	DECLARE @previous_add_string4 VARCHAR(15)   
	DECLARE @previous_add_string3 VARCHAR(15)   
	DECLARE @previous_add_string2 VARCHAR(15)   
	DECLARE @previous_bf_dr_trans NUMERIC(12,0)   
	DECLARE @previous_add_string1 VARCHAR(15)   
	DECLARE @previous_stmt_period VARCHAR(1)   
	DECLARE @previous_long_name_arab VARCHAR(40)   
	DECLARE @previous_ldc_fc_bal NUMERIC(18,3)   
	DECLARE @previous_risk_prov NUMERIC(9,6)   
	DECLARE @previous_dr_pt_rate NUMERIC(11,6)   
	DECLARE @previous_maturity_gl NUMERIC(6,0)   
	DECLARE @previous_from_account VARCHAR(34)   
	DECLARE @previous_ac_srl NUMERIC(12,0)   
	DECLARE @previous_ss_acc_no VARCHAR(6)   
	DECLARE @previous_cif_sub_no NUMERIC(8,0)   
	DECLARE @previous_auth_rej_dte DATETIME   
	DECLARE @previous_first_trans_date DATETIME   
	DECLARE @previous_blocked_fc NUMERIC(20,3)   
	DECLARE @previous_last_trans_date DATETIME   
	DECLARE @previous_obm_close_dte DATETIME   
	DECLARE @previous_remarks VARCHAR(200)   
       
    DECLARE @lv_mode VARCHAR(1), @lv_inserted numeric(18)   
	declare @lv_deleted numeric(18), @ls_key varchar(250)   
	DECLARE @lv_error_code numeric(4), @ls_error_desc varchar(250)   
	DECLARE @ls_message	VARCHAR(4000)      
	DECLARE @ll_origin_br_i numeric(4)   
   
        DECLARE FOR_EACH_ROW_CURSOR CURSOR FOR    
        SELECT  GMI_FLAG,    
                COMP_CODE,    
                CIF_SUB_NO,    
                CURRENCY_CODE,    
                GL_CODE,    
                BRANCH_CODE,    
                SL_NO,    
                BRIEF_NAME_ENG,    
                LONG_NAME_ENG,    
                MATURITY_DTE,   
                CASE WHEN ORIGIN_BR_I IS NULL THEN 0 ELSE ORIGIN_BR_I END   
        FROM inserted   
   
   
	OPEN FOR_EACH_ROW_CURSOR    
	FETCH FOR_EACH_ROW_CURSOR INTO @current_gmi_flag, @current_comp_code, @current_cif_sub_no, @current_currency_code, @current_gl_code, @current_branch_code, @current_sl_no, @current_brief_name_eng, @current_long_name_eng, @current_maturity_dte, @ll_origin_br_i   
	   
         WHILE (@@sqlstatus <> 2)   
   
	BEGIN   
	   
   
		   
	 IF @ll_origin_br_i = 0 AND @current_gmi_flag = 'G'   
		BEGIN--integration part: push IM request to Q   
			--prepare the xml message and send it to the IM Q   
			SELECT @lv_mode = 'I'   
			SELECT @ls_message =  '<AMF_PUSH>'  +                                              
			'<COMP_CODE type= ''integer''>'+convert(varchar(4),@current_comp_code) +'</COMP_CODE>' +      
			'<BRANCH_CODE type= ''integer''>'+convert(varchar(4),@current_branch_code) +'</BRANCH_CODE>' +      
			'<CURRENCY_CODE type= ''integer''>'+convert(varchar(3),@current_currency_code) +'</CURRENCY_CODE>' +     
			'<GL_CODE type= ''integer''>'+convert(varchar(6),@current_gl_code) +'</GL_CODE>' +                                          
			'<CIF_NO type=''integer''>'+ convert(varchar(8),@current_cif_sub_no)  + '</CIF_NO>' +     
			'<SL_NO type=''integer''>'+ convert(varchar(3),@current_sl_no)  + '</SL_NO>' +       
			'<IUD_FLAG type=''string''>'+ @lv_mode + '</IUD_FLAG>' +                                                                                             
			'</AMF_PUSH>'     
			SELECT @ls_key = convert(varchar(4),@current_comp_code) + '-' + convert(varchar(4),@current_branch_code) + '-' + convert(varchar(3),@current_currency_code)+ '-' + convert(varchar(6),@current_gl_code)+ '-' + convert(varchar(8),@current_cif_sub_no)+ '-' + convert(varchar(3),@current_sl_no)   
			EXEC P_SYNC_IM_REQUEST 103, @ls_message, '', @ls_key, @lv_error_code , @ls_error_desc   
			   
  
		END--end integration part   
   
   
	DECLARE @w_trx_recordid                           VARCHAR(40)    
	DECLARE @w_trx_info                               VARCHAR(3245)    
	DECLARE @ls_short_name_eng                        VARCHAR(20)    
	DECLARE @ls_long_name_eng                         VARCHAR(40)    
	DECLARE @ls_address1_eng                          VARCHAR(30)    
	DECLARE @ls_address2_eng                          VARCHAR(30)    
	DECLARE @ls_address3_eng                          VARCHAR(30)    
	DECLARE @ls_tel                                   VARCHAR(20)    
	DECLARE @ls_fax                                   VARCHAR(20)    
	DECLARE @ls_telex                                 VARCHAR(20)    
	DECLARE @ll_legal_status                          NUMERIC(3)    
	DECLARE @ll_relation_code                         NUMERIC(3)    
	DECLARE @ll_country                               NUMERIC(3)    
	DECLARE @ls_resident                              VARCHAR(1)    
	DECLARE @ls_type                                  VARCHAR(1)    
	DECLARE @ls_id_no                                 VARCHAR(20)    
	DECLARE @ls_add_string1                           VARCHAR(15)    
	DECLARE @ls_additional_reference                  VARCHAR(20)    
	DECLARE @trx_seq_nextval                          NUMERIC(10)   
	DECLARE @ls_update_categ                          VARCHAR(100)     
   
/*   
       -- check for UPDATE trigger Ation (IF UPDATED)    
       ---------------------------------------------------------------------------------------------------------------------------------    
       IF (SELECT COUNT(*) FROM inserted) > 0 AND (SELECT COUNT(*) FROM deleted) > 0    
       BEGIN   
          TO DO :)   
       END    
       ---------------------------------------------------------------------------------------------------------------------------------    
*/              
   
	/* This Procedure Affect the table TRX_LOG in case we Insert    a record related to the TABLE AMF */   
	   
	IF (SELECT COUNT(*) FROM inserted) > 0 AND (SELECT COUNT(*) FROM deleted) = 0    
	BEGIN    
   
        -- Changed by Liza Request on 11/04/2005   
        ---Begin Changes---------------------------------------------------------------------------   
        SELECT  @ls_update_categ = ISNULL(UPDATE_CIF_CATEG,'0')              
           FROM  CIFCONTROL    
           WHERE COMP_CODE = @current_comp_code   
       IF @@ROWCOUNT = 0   
        BEGIN   
             
         SELECT @ls_update_categ = '0'   
   
        END       
   
       --UPDATE CIF CATEGORY    
       IF  @ls_update_categ = '1'    
        BEGIN   
          UPDATE   CIF SET PC_IND = 'C'   
            WHERE  COMP_CODE = @current_comp_code   
	      AND   CIF_NO    = @current_cif_sub_no   
	END   
      ----END OF Changes--------------------------------------------------------------------------   
   
   
		IF @current_gmi_flag <> 'I'    
		BEGIN    
			SELECT   
					 @ls_short_name_eng  =  CIF.SHORT_NAME_ENG,   
					 @ls_long_name_eng  =  CIF.LONG_NAME_ENG,   
					 @ls_address1_eng  =  CIF.ADDRESS1_ENG,   
					 @ls_address2_eng  =  CIF.ADDRESS2_ENG,   
					 @ls_address3_eng  =  CIF.ADDRESS3_ENG,   
					 @ls_tel  =  CIF.TEL,   
					 @ls_fax  =  CIF.FAX,   
					 @ls_telex  =  CIF.TELEX,   
					 @ll_legal_status  =  CIF.LEGAL_STATUS,   
					 @ll_relation_code  =  CIF.RELATION_CODE,   
					 @ll_country  =  CIF.COUNTRY,   
					 @ls_resident  =  CIF.RESIDENT,   
					 @ls_type  =  CIF.TYPE,   
					 @ls_id_no  =  CIF.ID_NO,   
					 @ls_add_string1  =  CIF.ADD_STRING1,   
					 @ls_additional_reference  =  CIF.ADDITIONAL_REFERENCE   
			FROM  CIF,   
				 inserted    
			WHERE	 CIF.COMP_CODE  = inserted.COMP_CODE   
			 AND	CIF.CIF_NO  = inserted.CIF_SUB_NO   
			   
			SELECT @w_trx_recordid  = LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_comp_code + 10000),2,4))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_cif_sub_no + 100000000),2,8))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_currency_code + 1000),2,3))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_gl_code + 1000000),2,6))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_branch_code + 10000),2,4))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_sl_no + 1000),2,3))))   
			SELECT @w_trx_info      = '//' + LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_comp_code + 10000),2,4))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_branch_code + 10000),2,4))))+ CHAR(254)+ @ls_short_name_eng + CHAR(254)+ @ls_long_name_eng + CHAR(254)+ @ls_address1_eng + CHAR(254)+ @ls_address2_eng + CHAR(254)+ @ls_address3_eng + CHAR(254)+ @ls_tel + CHAR(254)+ @ls_fax + CHAR(254)+ @ls_telex + CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@ll_legal_status + 1000),2,3))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@ll_relation_code + 1000),2,3))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@ll_country + 1000),2,3))))+ CHAR(254)+ @ls_resident + CHAR(254)+ @ls_type + CHAR(254)+ @ls_id_no + CHAR(254)+ @ls_add_string1 + CHAR(254)+ @ls_additional_reference + CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_gl_code + 1000000),2,6))))+ CHAR(254)+ LTRIM(RTRIM(CONVERT(VARCHAR, SUBSTRING(CONVERT(VARCHAR,@current_cif_sub_no + 100000000),2,8))))+ CHAR(254)+ @current_brief_name_eng + CHAR(254)+ @current_long_name_eng + CHAR(254)+ CONVERT(VARCHAR, @current_maturity_dte,103)+ '//'    
			   
			UPDATE SEQUENCES    
			SET LAST_NUMBER = LAST_NUMBER + INCREMENT    
			WHERE SEQUENCE_NAME = 'TRX_SEQ'   
   
			SELECT     @trx_seq_nextval =  LAST_NUMBER   
			FROM SEQUENCES    
			WHERE SEQUENCE_NAME =  'TRX_SEQ'   
   
			INSERT INTO  TRX_LOG       
					( IM_ENTITYCODE ,    
					TRX_RECORDID ,    
					TRX_TIME ,    
					TRX_TYPE ,    
					TRX_STATUS ,    
					TRX_INFO ,    
					TRX_CODE )     
			 VALUES 		( 2 ,    
					@w_trx_recordid ,    
					GETDATE() ,    
					'c' ,    
					's' ,    
					@w_trx_info ,    
					@trx_seq_nextval )     
			   
		END   
      
	END   
      
		FETCH FOR_EACH_ROW_CURSOR INTO @current_gmi_flag, @current_comp_code, @current_cif_sub_no, @current_currency_code, @current_gl_code, @current_branch_code, @current_sl_no, @current_brief_name_eng, @current_long_name_eng, @current_maturity_dte, @ll_origin_br_i   
	END   
   
	CLOSE FOR_EACH_ROW_CURSOR   
	DEALLOCATE CURSOR FOR_EACH_ROW_CURSOR   
		   
END   


 
LVL 19

Expert Comment

by:grant300
ID: 23736167
ALTER TABLE AMF LOCK DATAPAGES
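
(To the free-space question: sp_chgattribute is the usual ASE mechanism.  A hedged sketch; the option names are standard ASE, but the values here are assumptions, so check them against the 12.5 documentation for your build.)

-- reserve room on data pages so rows can grow in place
sp_chgattribute 'AMF', 'exp_row_size', 200   -- assumed expected row size, in bytes
go
sp_chgattribute 'AMF', 'fillfactor', 80      -- assumed value; applies at the next reorg/index rebuild
go
-- then rebuild so the settings take effect
reorg rebuild AMF
go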

Now for the trigger....

This whole thing reminds me of every plumbing project I have ever done that started with a leaky faucet.  The problems invariably wind up cascading all the way back to the hot water heater and the water meter.

If we start at the bottom, the design of the TRX_LOG table is hosed.  Using a separate SEQUENCES table in Sybase means that whoever designed/wrote this thing is an Oracle person who does not know Sybase well at all.  The correct way is to use an identity column.

With an identity column to assign the TRX_CODE, all of the crappy SQL around getting and setting the TRX_SEQ_NEXTVAL goes away.

At that point, the entire TRX_LOG maintenance step can be done with a single INSERT/SELECT and the cursor and while loop go away.
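
(A sketch of that rewrite; the record-id and info strings are shortened here for illustration, and TRX_CODE is assumed to have become an identity column so it drops out of the column list.)

-- with TRX_CODE as an identity column, the SEQUENCES update/select disappears
-- and the cursor loop collapses to one set-based statement:
INSERT INTO TRX_LOG (IM_ENTITYCODE, TRX_RECORDID, TRX_TIME, TRX_TYPE, TRX_STATUS, TRX_INFO)
SELECT 2,
       convert(varchar(4), i.COMP_CODE) + char(254) + convert(varchar(8), i.CIF_SUB_NO),  -- shortened record id
       GETDATE(), 'c', 's',
       '//' + c.SHORT_NAME_ENG + char(254) + c.LONG_NAME_ENG + '//'                       -- shortened info string
  FROM inserted i, CIF c
 WHERE c.COMP_CODE = i.COMP_CODE
   AND c.CIF_NO = i.CIF_SUB_NO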

This is a perfect, textbook example of how a seemingly minor but excruciatingly poor design choice has cascaded to create a godawful mess in the implementation.  I am willing to bet this pattern is repeated throughout the system.

The next part of the trigger is the section that executes the P_SYNC_IM_REQUEST "routine".  I have no idea what P_SYNC_IM_REQUEST is.  It could be a Java routine, it could be an XP_SERVER routine, or it could be stored procedure.

In any event, the trigger was really the wrong place to do record at a time processing and it is a cardinal sin to call any kind of external process (SPs are O.K. but not things that go outside the database server itself) in the critical path of a SQL operation.  The correct way to structure this would be to create a queue table and stuff the necessary data into it from the trigger.  Again, this would be done with a single INSERT/SELECT.  It is then the responsibility of some other process to wait for new records in the queue and process them one at a time.

Next, this should really be two triggers, one for INSERT and one for UPDATE.  After you take out all the cursor processing and the 60 or so totally unused declaration statements, you just aren't left with that much code.

The SQL within the trigger is kind of naive as well.  Doing a SELECT COUNT(*) on both inserted and deleted in order to determine if this was an update operation is not very efficient.  The point is moot if you split the trigger into two anyway, but if not, you just want to check for existence, e.g. IF EXISTS (SELECT * FROM inserted) AND EXISTS (SELECT * FROM deleted).  This way it does not have to table scan both pseudo tables in order to determine the operation.

Your best hope for a simple fix is to clean up some of the index nonsense and try using DOL.  Give that a shot and let us know what you come up with.

Regards,
Bill
 

Author Comment

by:farhanitman
ID: 23753080
I ran ALTER TABLE AMF LOCK DATAPAGES,
then did reorg rebuild on AMF,
and then tried my batch updates.
It is now taking a lot more time on the first 600 batch update statements.
 

Author Comment

by:farhanitman
ID: 23753187
Attaching query plan and sp_help on AMF

sp-help-AMF.txt
query-plan-amf.txt
 
LVL 19

Expert Comment

by:grant300
ID: 23757166
Well, I have no idea what query you executed to get the query plan.

In addition, the problem appears to be the SELECT statement in the plan that is doing a table scan.  It is doing a "forward scan", "position at beginning of table", which means that it is doing a brute-force read of the table, probably for each row you are updating.

Then, to make matters worse, it refers to line 82, which is in the middle of that long string of useless DECLARE statements (assuming the problem is in the trigger), so I have no faith that the trigger code you included earlier is actually the code being executed.

In any event, the optimizer did not bother to switch indexes when you deleted the non-clustered one.  You have to recompile the trigger in order to get it to reoptimize the query plans.  If that fails, you can always hint the index if need be.

Also, what is the memory configuration of your server and how large is the default data cache?  In fact, how big (rows and megabytes) is this main table and the transaction table you are dealing with?

Regards,
Bill


 

Author Comment

by:farhanitman
ID: 23761806
The query I executed is the same update statement that I executed earlier.
I recompiled the trigger also.
Microsoft Windows Server 2003 R2
Enterprise x64 Edition SP1
16GB RAM
 
LVL 19

Expert Comment

by:grant300
ID: 23764080
Maybe I missed it but I did not see a free-hand update statement typed into an interactive query tool.  I assume you are doing a single update using the key fields with specific values.  If that is not the case, show us what you typed in.

I am a bit curious as to how you are running the query and capturing the query plan.  One issue is that I don't see the section for the original Update.  The next is, as I said before, the line numbers don't match with the trigger code you posted.

What kind of tool(s) are you using?  At the very least, you should have Sybase Central available.  You could double check the trigger code by pulling it up there.

As far as memory goes, you have 16GB of RAM on the Machine.  We have not seen any of the memory configuration parameters for the Database itself.

Attach the Sybase config file and we can take a look at it.  You would be surprised how many sites are running with the defaults, which are very low, and manage to get by on them.

For the buffer cache configuration, fire up Sybase Central and navigate to the cache folder.  There you will find the configured buffer cache(s).  Take a screen shot (<Alt><PrtScrn>) so we can check that out as well.

Is the DBA who created the database and triggers still around?  Does he know how to tune Sybase?   It is usually pretty tough for the application developer to do this stuff without at least some help from the DBA.

Regards,
Bill


 

Author Comment

by:farhanitman
ID: 23782305
I executed the following update statement, and I executed it in SQL Advantage.

UPDATE AMF
SET BLOCKED_CV = 0, BLOCKED_FC = 0
WHERE COMP_CODE = 1
AND BRANCH_CODE = 1003
AND CURRENCY_CODE = 586
AND GL_CODE = 210202    
AND CIF_SUB_NO = 10059    
AND SL_NO = 0;


In Sybase Central there are 2 triggers on AMF, which I am attaching as files.
Where can I find the Sybase config file?
The DBA is not around, unfortunately.  We bought the system from another company.

TRG-AMF.txt
TRG-AMF-IM.txt
snapshot.JPG
 
LVL 19

Expert Comment

by:grant300
ID: 23785124
It looks as if the default data cache is set to 9GB, so that is plenty large enough.  Configuration is probably not the primary issue here.  Based on the 9GB, this is a 64-bit version of Sybase.  What O/S is it running on?

The table scan in the query plan at "line 82" turns out to be line 82 of the Update trigger, which makes sense now that we have the right code.  It appears the join was designed to handle inserts as well as updates or, alternatively, someone thought that the primary key value(s) might change.  Neither of those make much sense.

That join would be more efficient if it was coded not as a left outer join but as an equijoin between the deleted and the actual table.  The table has the "after image" in it and will match the contents you get in the inserted table but has the advantage of being indexed.

Line 82 is the cursor declaration and is a join between the Inserted and Deleted pseudo tables.  The problem with that design is that neither of those tables is indexed.  That is O.K. if you are doing one record at a time but gets to be pretty bad as the number of records processed increases.

The triggers are (poorly) written to handle operations that affect multiple rows.  It is unfortunate, but what you have here is a project, not a quick question.  There is so much going on here, and so much rewriting that needs to be done to get this working even halfway decently, that I don't know how to tell you where to start.

That's not true, I do.  You need a decent database IDE with a debugger and profiling tool.  At one end of the scale you have Embarcadero RapidSQL Pro for about $1000 a seat.  At the other end you have Sybase Workspace which, for the time being, is free.  In the middle you have Aquafold Aqua Data Studio for about $400.  I use RapidSQL and Aqua Data all the time and have Workspace but have not installed it yet.  You really need to get one of these tools up and running so that you can get into the guts of this code.  A debugger with execution-time profiling is priceless for this kind of thing.

Another thing to try is making the primary key non-clustered.  Go back to all-pages locking if you do that.

Finally, the only way to really get decent performance out of this pig without rewriting the triggers and the IM scheme altogether is to do a bulk load into a working table and then do the join that way.  The latencies and the round trips between the client and the server are making a bad situation much worse.  I don't know how much latitude you have to modify this system but you are going to have to make the trade-offs between twisting what you are trying to do to fit a bad system and/or working to improve the system overall.

All of this is really a shame.  Sybase is a very fast database and in the hands of someone who knows what they are doing, it can do some amazing things.  Unfortunately, the organization that created this mess put what was undoubtedly a second string Oracle person on this job.  Not only did they make a bunch of the classic "jamming Oracle down Sybase's throat" mistakes, any number of the design decisions were just plain bad practice, regardless of platform.

Sorry there is not an easy answer to this.  For all we know, it is the IM routine that is getting backed up.  Can you post that procedure as well?
 

Author Comment

by:farhanitman
ID: 23791914
I'll try to batch insert into a temporary table and then use a single update to solve my problem.
Should I disable the triggers on AMF first, then do the updates, and then enable them again?
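
(A sketch of the loader half of that plan; the working-table name matches the one created in the next comment, the row shape is assumed, and the chunk size is arbitrary.  With the DataDirect BatchPerformanceWorkaround parameter Bill mentioned, the same batch goes through the bulk interface.)

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class WorkingTableLoader {
    // Batch-inserts the reconciled rows into the working table; a single
    // UPDATE ... FROM join on the server then applies them all to AMF at once.
    static void load(Connection conn, List<Object[]> rows) throws SQLException {
        conn.setAutoCommit(false);
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO AMF_BLOCK_SYNC_MK (COMP_CODE, BRANCH_CODE, CURRENCY_CODE, " +
            "GL_CODE, CIF_SUB_NO, SL_NO, BLOCKED_CV, BLOCKED_FC) " +
            "VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
        int pending = 0;
        for (Object[] row : rows) {          // row shape assumed: the 8 columns, in order
            for (int i = 0; i < 8; i++) {
                ps.setObject(i + 1, row[i]);
            }
            ps.addBatch();
            if (++pending % 1000 == 0) {     // arbitrary chunk size
                ps.executeBatch();
                ps.clearBatch();
            }
        }
        ps.executeBatch();                   // flush the remainder (a no-op if empty)
        ps.close();
        conn.commit();
    }
}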
 

Author Comment

by:farhanitman
ID: 23792123
I have created a new temporary table and will use the following single update statement.
CREATE TABLE AMF_BLOCK_SYNC_MK(
COMP_CODE numeric(4),
BRANCH_CODE numeric(4),
CURRENCY_CODE numeric(3),
GL_CODE numeric (6),
CIF_SUB_NO numeric(8),
SL_NO numeric(3),
BLOCKED_CV numeric(15),
BLOCKED_FC numeric(15),
CONSTRAINT PK_AMF_BLOCK_SYNC_MK PRIMARY KEY
  (
    COMP_CODE, BRANCH_CODE, CURRENCY_CODE, GL_CODE, CIF_SUB_NO, SL_NO
  )
  ENABLE
);

Is the following update OK?

UPDATE AMF A
SET A.BLOCKED_CV = (SELECT BLOCKED_CV 
                    FROM AMF_BLOCK_SYNC_MK
                    WHERE COMP_CODE = A.COMP_CODE
                    AND BRANCH_CODE = A.BRANCH_CODE
                    AND CURRENCY_CODE = A.CURRENCY_CODE
                    AND GL_CODE	= A.GL_CODE
                    AND CIF_SUB_NO = A.CIF_SUB_NO
                    AND SL_NO = A.SL_NO ),
    A.BLOCKED_FC = (SELECT BLOCKED_FC
                    FROM AMF_BLOCK_SYNC_MK
                    WHERE COMP_CODE = A.COMP_CODE
                    AND BRANCH_CODE = A.BRANCH_CODE
                    AND CURRENCY_CODE = A.CURRENCY_CODE
                    AND GL_CODE	= A.GL_CODE
                    AND CIF_SUB_NO = A.CIF_SUB_NO
                    AND SL_NO = A.SL_NO );


 
LVL 19

Accepted Solution

by:
grant300 earned 200 total points
ID: 23795253
Gosh no!!!  Those are correlated subqueries, and performance will be incredibly bad.  The optimizer MIGHT try to flatten that out into a JOIN, but I suspect that at best it will join to the work table twice, and in the worst case it will do two queries for every row in the table.  Not good.

The correct syntax is a JOIN between the driving and target tables...  As you can see in the code snippet, it works for any number of columns and there is one clean join.  The work table is the driver since it will have many fewer rows in it.  I would remove the primary key, as it will not be needed and, in fact, will only slow things down while you load the table.

BTW, all the documentation is available for download as PDFs for free from sybooks.sybase.com so you can have a ready reference to the syntax of the commands, statements, and functions.

Also, I am curious about one thing.  You say this is ASE, and the code you have posted certainly looks like ASE; however, this command is terminated with a semicolon, a decidedly non-ASE thing to do.

One other thought I had about the configuration of the server.  You are using a good chunk of the RAM (9+ of 16GB), which in Windows may invite paging.  It ain't too smart that way.  There are two Sybase configuration parameters you want to make certain are set.  The first grabs all of the memory at startup and the second locks the shared memory segment into RAM so it won't be paged.  Both of those options should be set.  You can find them in the server configuration section of Sybase Central.  Just right-click on the server icon and configuration will be one of the options.
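
(Those two parameters, as named in ASE's sp_configure; worth confirming against the 12.5 documentation for this exact build.)

sp_configure "allocate max shared memory", 1
go
sp_configure "lock shared memory", 1
go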

Regards,
Bill

UPDATE AMF
   SET BLOCKED_CV = W.BLOCKED_CV,
       BLOCKED_FC = W.BLOCKED_FC
  FROM AMF,
       AMF_BLOCK_SYNC_MK W
 WHERE AMF.COMP_CODE = W.COMP_CODE
   AND AMF.BRANCH_CODE = W.BRANCH_CODE
   AND AMF.CURRENCY_CODE = W.CURRENCY_CODE
   AND AMF.GL_CODE = W.GL_CODE
   AND AMF.CIF_SUB_NO = W.CIF_SUB_NO
   AND AMF.SL_NO = W.SL_NO


0
 

Author Closing Comment

by:farhanitman
ID: 31549543
Thanks