schjo03 asked:
How can I quickly update/insert 5 or 6 million rows into a table?
I have tried the insert-then-update-on-exception method. That took an incredibly long time.
Next I tried a MERGE "upsert". That also took almost 5 hours.
I have a primary key on the target table, and the source data is sorted by those key values.
I am breaking the rows into logical blocks of 50,000 and committing after each block.
The explain plan shows access by index ROWID for both tables.
I am out of ideas. Any suggestions welcome.
Can you post your code/script?
Try using the /*+ APPEND */ hint for a direct-path INSERT. This can reduce the logging. With 5-6 million rows you could do this very easily. If you can, drop all the primary keys/constraints and recreate them after the load.
Or, set the table to NOLOGGING mode to bypass the redo.
HTH.
~Sve
Usually a plain INSERT is much faster than a MERGE.
If your table is partitioned, you can exchange the partition.
For inserting, you can use parallelism either via a hint or at the object level.
You can think of reducing I/O via DB_FILE_MULTIBLOCK_READ_COUNT: leave it unset and Oracle decides its best value during execution.
If you choose parallelism, you can set PARALLEL_EXECUTION_MESSAGE_SIZE=16K.
>> explain plan shows access by index rowid for both tables
Using an index doesn't mean that you will gain performance. Since you are loading 5 to 6 million records, you need to find out what percentage of the table's data you are inserting.
Here you can try a HASH join with a PARALLEL hint.
You can also consider a tablespace with a bigger extent size, so that Oracle does not waste time allocating extents to the segment.
Is it possible to rename the object from which you are trying to insert?
Before coming up with a solution, we need to know the insert statement and the source DB specifics: whether the source table is in the same user, a different user, or a different database over a network, whether there is a firewall in between, etc.
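A sketch of the parallel hash-join MERGE suggested above, using the join columns from the asker's procedure (the degree of parallelism 4 is an assumption; tune it to your hardware):

```sql
-- Parallel DML must be enabled at the session level before the statement.
ALTER SESSION ENABLE PARALLEL DML;

-- One set-based MERGE over the whole source. At this volume a full scan
-- plus hash join usually beats per-row index-ROWID access.
MERGE /*+ PARALLEL(sh 4) USE_HASH(src) */ INTO sales_history sh
USING local_sls src
ON (sh.sku = src.sku_nk
    AND sh.location = src.location_nk
    AND sh.retail_year_month = src.day_sk)
WHEN MATCHED THEN UPDATE
    SET sh.on_hand_quantity = src.on_hand_qty,
        sh.on_hand_cost     = src.on_hand_dollars
WHEN NOT MATCHED THEN INSERT
    (sku, location, retail_year_month,
     sales_quantity, sales_cost, sales_value,
     return_quantity, return_cost, return_value,
     on_hand_quantity, on_hand_cost)
    VALUES
    (src.sku_nk, src.location_nk, src.day_sk,
     0, 0, 0, 0, 0, 0,
     src.on_hand_qty, src.on_hand_dollars);
COMMIT;
```

A single MERGE with one COMMIT also avoids the per-bucket commit overhead of the chunked approach.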
ASKER
Sorry I took so long to respond.
Here is the load script I use.
It does about 90% updates, 10% inserts.
CREATE OR REPLACE PROCEDURE LOAD_SALES
authid current_user
AS
  CURSOR CMN1 IS
    SELECT MIN_KEY
         , MAX_KEY
         , CT
         , BUCKET
         , PROCESSED
         , ROWID
      FROM BUCKET_TABLE
     ORDER BY BUCKET DESC;

  PROCEDURE load_history_month IS
  BEGIN
    EXECUTE IMMEDIATE 'DROP TABLE BUCKET_TABLE PURGE';
    -- Rebuild the driving table: split the source keys into 75 roughly
    -- equal ranges, with a PROCESSED flag for restartability.
    EXECUTE IMMEDIATE '
      CREATE TABLE BUCKET_TABLE TABLESPACE TBS_SMALL AS
      SELECT MIN(KEY_SK) AS MIN_KEY
           , MAX(KEY_SK) AS MAX_KEY
           , COUNT(*)    AS CT
           , BUCKET
           , ''FALSE''   AS PROCESSED
        FROM (SELECT KEY_SK, NTILE(75) OVER (ORDER BY KEY_SK) AS BUCKET
                FROM LOCAL_SLS
               WHERE KEY_SK > 0)
       GROUP BY BUCKET';
    BEGIN
      FOR VAL IN CMN1 LOOP
        MERGE INTO SALES_HISTORY
        USING (SELECT * FROM LOCAL_SLS
                WHERE KEY_SK BETWEEN VAL.MIN_KEY AND VAL.MAX_KEY) SRC
        ON (SKU = SRC.SKU_NK
            AND LOCATION = SRC.LOCATION_NK
            AND RETAIL_YEAR_MONTH = SRC.DAY_SK)
        WHEN MATCHED THEN
          UPDATE SET ON_HAND_QUANTITY = SRC.ON_HAND_QTY
                   , ON_HAND_COST     = SRC.ON_HAND_DOLLARS
        WHEN NOT MATCHED THEN
          INSERT ( SKU, LOCATION, RETAIL_YEAR_MONTH
                 , SALES_QUANTITY, SALES_COST, SALES_VALUE
                 , RETURN_QUANTITY, RETURN_COST, RETURN_VALUE
                 , ON_HAND_QUANTITY, ON_HAND_COST )
          VALUES ( SRC.SKU_NK, SRC.LOCATION_NK, SRC.DAY_SK
                 , 0, 0, 0, 0, 0, 0
                 , SRC.ON_HAND_QTY, SRC.ON_HAND_DOLLARS );
        -- Mark the bucket as done so a rerun can skip it.
        UPDATE BUCKET_TABLE
           SET PROCESSED = 'TRUE'
         WHERE ROWID = VAL.ROWID;
        COMMIT;
      END LOOP;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Unexpected Error During Merge Process '||SQLERRM);
    END;
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('Unexpected Error During Bucket Process '||SQLERRM);
  END load_history_month;
BEGIN
  load_history_month;
END LOAD_SALES;