sqlldr issue

let me explain this....

I had one sqlldr session running while I was creating indexes on 6 fields of the table... now the indexes are complete, but sqlldr is running very slowly, loading one file every 30 minutes...

now when I try any ALTER to disable the indexes it won't let me and throws errors (ORA-00054 and ORA-00604), and the same happens even if I start another sqlldr....

any suggestion will be highly appreciated.... below is a sample of my control file header...

    OPTIONS(DIRECT=TRUE,ROWS=10000,BINDSIZE=209700000,readsize=209700000)
    load data 
    append
    into table name
    FIELDS TERMINATED by '!'
    OPTIONALLY ENCLOSED by '"'
    trailing nullcols




When I run the following query:

    select * from dba_dml_locks
    
    session_id,owner,name,mode_held,mode_requested,last_convert,blocking_others
    123,username,tablename,exclusive,none,1228,not blocking
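
In case it helps, I can also join that lock view back to v$session to see exactly which session is holding the lock (something along these lines; the schema name in the WHERE clause is just a placeholder):

    -- join dba_dml_locks to v$session to identify the session holding the
    -- exclusive lock on the table
    select s.sid, s.serial#, s.username, s.program, l.name, l.mode_held
      from dba_dml_locks l
      join v$session s on s.sid = l.session_id
     where l.owner = 'USERNAME';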



PS: I updated the question as I had mixed up two questions, sorry guys.
Asked by: hi4ppl

Naveen Kumar (Production Manager / Application Support Manager) Commented:
<<<<< while I'm here I thought of another solution, I don't know whether it will work or not... what if I create one table as a staging area and load the data into it index-free, with another table that has the indexes created on it... once I load the data I then insert it with a SQL statement into the indexed table... so my question is, will this be just as time-consuming? >>>>> ---> This will be of no help to you; you would end up spending the same amount of time loading the data from your staging table into the main table, because the main table would still have those indexes and they would need to be updated/maintained along with the data load.

Thanks
 
slightwv (Netminder) Commented:
Drop the indexes, don't try to alter them.

You should either wait until the table has finished loading before you create the indexes, or accept the delays from updating all the indexes while you are adding data to the table.
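
Roughly like this, with the index/table/column names just as placeholders:

    -- drop the index, run all the sqlldr loads, then create it again
    DROP INDEX idx_tablename_field1;
    -- ... run the loads ...
    CREATE INDEX idx_tablename_field1 ON tablename (field1);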
 
Naveen Kumar (Production Manager / Application Support Manager) Commented:
As already said above, let your loading finish and only then create the indexes. As you may know, if the indexes already exist before loading, there is extra overhead for the database to maintain them.

Any reason why you are doing both in parallel?
 
hi4ppl (Author) Commented:
Hi,

thanks for the reply... well, loading is a daily operation: I receive around 8000+ files each day, around 20GB, and I have to load them every day.... and I need the indexes to speed up the queries, so I have to have both at the same time... if I have to drop the indexes, load the data, and then recreate them every day, it will create huge backlogs...

and if I wait for the load to finish, that will take forever: there are 9000 files and I don't know how many have been loaded so far; maybe 6000 are loaded, and if I wait for the remaining 3000 at the current pace of 1 file every half an hour, as I mentioned earlier, it will take months....

any other suggestion?
 
Naveen Kumar (Production Manager / Application Support Manager) Commented:
Oh.. I do not think you have an alternative way around this.

Maybe if your tables have constraints etc., that would add further overhead. Disabling them before loading and enabling them after loading may help speed things up, but you need to be aware of the consequences of doing that.
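
Something along these lines if you try it (the constraint and table names are placeholders; remember the enable step re-validates the existing data):

    -- disable constraints before loading, re-enable them after the load
    ALTER TABLE tablename DISABLE CONSTRAINT tablename_fk1;
    -- ... load the files ...
    ALTER TABLE tablename ENABLE CONSTRAINT tablename_fk1;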
 
hi4ppl (Author) Commented:
Hi,

thanks for the reply... what could the consequences be if I do that, removing/disabling the indexes? ... I don't have any constraints ... and also, since this is my daily operation, any other suggestion to avoid such things would be highly appreciated...
 
Naveen Kumar (Production Manager / Application Support Manager) Commented:
OK.. if you do not have constraints, then you are left with the indexes.

Why did it become slow all of a sudden?
Were any new indexes created recently, after which you started seeing the slowness?
Did the data volumes increase?
Does your table have a lot of DML (insert, update, delete) happening every day, or do only inserts happen when loading data?

I am thinking the indexes should have been there from day one if nothing has changed recently.
 
hi4ppl (Author) Commented:
Hi...

No, the problem arose when I created the indexes.... the indexes were not there before... actually this data was in Postgres before and I recently migrated it to Oracle for more options and control... I never had this issue in Postgres; even with indexes the loading was normal... then I started loading here in Oracle, a couple of days were fine, and things got this slow once I created the indexes...

Now I have removed the indexes and the loading is back to normal.... since this is a daily operation for me, at the end of the day I need the indexes to be present to speed up the queries... so if I do (alter index indexname unusable), is that the same as drop index indexname?

Since it's my daily operation, I'm thinking of automating it: drop the indexes, load the data, then create the indexes again... this raises the question of how long the index step itself will take... I have to see whether the indexes take more time than the loading, or than running the queries without indexes, and then decide... that is what came to my mind... suggestions would be appreciated...
 
slightwv (Netminder) Commented:
So, you are getting 20 Gig a day.

Is that being added to the table each day or are you clearing out the table before you load the new data?

In other words:  Is the table always 20 Gig, or will it grow by 20 gig every day and eventually be Terabytes in size?
 
hi4ppl (Author) Commented:
Hi... the table keeps growing; that 20G is added to the table...

The table is renewed every month... so for each month it is a different table... and the table is partitioned by date, so 28-31 partitions depending on the month (e.g. for Feb there are 28+1 partitions at most)... the tablespaces for the table and the indexes are separate and on different disks; I did this to optimize performance ....
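
To give an idea, the layout is roughly like this (the names and tablespaces here are made up, just to show the structure):

    -- one table per month, range partitioned by the load date, with the data
    -- and the indexes kept in tablespaces on different disks
    CREATE TABLE data_feb (
      load_date  DATE,
      field1     VARCHAR2(100)
    )
    TABLESPACE data_ts
    PARTITION BY RANGE (load_date) (
      PARTITION p01  VALUES LESS THAN (DATE '2013-02-02'),
      PARTITION p02  VALUES LESS THAN (DATE '2013-02-03'),
      PARTITION pmax VALUES LESS THAN (MAXVALUE)
    );

    CREATE INDEX idx_data_feb_date ON data_feb (load_date)
      TABLESPACE index_ts;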
 
slightwv (Netminder) Commented:
If the table continues to grow, then marking the index unusable with a rebuild might be better.  Of course you will need to test this to make sure.

You need to mark the indexes unusable before the load starts and rebuild them after the load completes.
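
Something like this, with the index name just a placeholder (note that a direct path load can still fail if a unique index is left unusable, so test it):

    -- mark the index unusable before the loads, rebuild it once they finish
    ALTER INDEX idx_tablename_field1 UNUSABLE;
    -- sqlldr can then be run with SKIP_UNUSABLE_INDEXES=TRUE on the command line
    -- ... run the loads ...
    ALTER INDEX idx_tablename_field1 REBUILD;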
 
hi4ppl (Author) Commented:
Hi, thanks... but is "alter index indexname unusable" the same as "drop index indexname"
in terms of:

- rebuilding vs re-creating
- taking time

and I still wonder how other people with terabytes of data deal with this... are they doing the same thing, dropping or altering the indexes every time they load?

thanks
 
slightwv (Netminder) Commented:
>>but is "alter index indexname unusable" the same as "drop index indexname" in terms of:

From what I've read, marking it unusable and then rebuilding can take advantage of some of what is already there.

Dropping it means re-creating it completely from scratch.

You will need to test to see which one is actually better for your specific situation.

I think a lot would depend on the type of index: is it global or local to the partition? You might be able to work with just the individual partition's index if it is local, but I've not done a lot with partitioning.

>>are they doing the same practice of dropping or altering index every time they load?

I don't know. I've not been around enough DSS databases to have first-hand knowledge.
 
Naveen Kumar (Production Manager / Application Support Manager) Commented:
but is "alter index indexname unusable" the same as "drop index indexname"?

As said above, they differ slightly with respect to the time they take, and I too believe that in your case you can first try making the index unusable and then rebuilding it, to see if that helps.

Dropping and then re-creating may take longer in theory, but if the unusable/rebuild approach does not help, give drop/recreate a go as well so you get a fair idea of what each takes and which would best suit your requirements.

Thanks,
 
hi4ppl (Author) Commented:
Hi all, I don't know how to distribute the points here as everyone gave valuable information...
 
slightwv (Netminder) Commented:
Select "Accept Multiple Solutions" and you can assign points to the specific posts you wish to award.
 
hi4ppl (Author) Commented:
okay so here is my update...

I tried dropping and re-creating the indexes:

I have 6 indexes over a table of 400 million+ records, still growing every day ....

- dropping them takes 30 seconds
- creating them again takes 4 hours

 
With unusable/rebuild:
- setting the indexes unusable takes 40 seconds
- rebuilding the indexes after the load is done takes 6 hours

So with all these results I assume I have to go with dropping and re-creating the indexes...

While I'm here, I thought of another solution, I don't know whether it will work or not... what if I create one table as a staging area and load the data into it index-free, with another table that has the indexes created on it... once I load the data, I then insert it with a SQL statement into the indexed table... so my question is, will this be just as time-consuming?

regards
 
slightwv (Netminder) Commented:
I don't see what problem loading into a staging table first will solve.

It is the index on the main table that is slowing things down.  Eventually the data will have to be loaded in the main table and the indexes updated.
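
The staging step would just boil down to something like this (table names are placeholders), and that insert still has to maintain every index on the main table:

    -- direct-path insert from the un-indexed staging table into the indexed
    -- main table; the index maintenance cost is simply paid here instead
    INSERT /*+ APPEND */ INTO main_table
      SELECT * FROM staging_table;
    COMMIT;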

Have you looked into local or partitioned indexes?  As I posted above, I've not done a lot with partitioning but know a little.  If you create local indexes on the partitions then you only need to rebuild the index on the partitions that changed.
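
As a rough sketch (index, table, and partition names are placeholders):

    -- a LOCAL index is partitioned the same way as the table, so after a load
    -- only the partition(s) that received new rows need a rebuild
    CREATE INDEX idx_tablename_field1 ON tablename (field1) LOCAL;
    ALTER INDEX idx_tablename_field1 REBUILD PARTITION p15;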