gs79

asked on

Run a query only when no other query is running in the database

I have a PL/SQL block which reads rows and inserts them into a table. I want to make sure that when this block runs, no other query is running in the database.

The block should run between 10 pm and 6 am. When it fires automatically, it should check whether any other queries are running and proceed only when none are.

Is there a way to do this?

Here is the query:

begin
insert into tab
select col1, col2 from
tab a, tab b, tab c
where <join conditions & filters>;
end;
/
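For the "fires automatically at 10 pm" part, a minimal sketch using DBMS_SCHEDULER (the job and procedure names are made up for illustration; the 6 am cutoff and the "nothing else running" check would still have to be handled inside the block itself):

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'NIGHTLY_TAB_LOAD',             -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin nightly_tab_load; end;', -- hypothetical procedure wrapping the block above
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=22',        -- fire at 10 pm every day
    enabled         => TRUE);
END;
/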
TJCombos

You can create a script to disable all users' access during that time range so you can perform your task.
- The following query will check for active sessions with a query running at the time; you can modify it further to suit your needs:

SELECT a.sid, a.serial#, b.sql_text, a.username  
FROM v$session a, v$sqlarea b  
WHERE a.sql_address = b.address;
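Building on that view, a minimal sketch of how the block could skip its run when other user sessions are active (the STATUS/TYPE filters are my assumption of what "no other query running" means; they are not from the thread):

DECLARE
  l_active NUMBER;
BEGIN
  -- count active user sessions other than our own
  SELECT COUNT(*)
    INTO l_active
    FROM v$session
   WHERE status = 'ACTIVE'
     AND type   = 'USER'
     AND sid   <> SYS_CONTEXT('USERENV', 'SID');

  IF l_active = 0 THEN
    -- nothing else is active, so run the load
    insert into tab
    select col1, col2
      from tab a, tab b, tab c
     where <join conditions & filters>;
    commit;
  END IF;
END;
/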
You can't lock the whole database while executing a query against it, so you need to lock the tables involved in your query instead.

Since this is an Oracle database, you can lock the tables explicitly so that, whoever connects to the database, they cannot modify those tables until you commit or roll back:

Code:
 LOCK TABLE table_name IN EXCLUSIVE MODE;

Have a look at this:

http://forums.techarena.in/software-development/1243154.htm

It might help you.
- Can you share with us what you are trying to achieve? Does the query involve all the user tables in the database? If not, then the earlier expert suggestion of locking just the tables involved in your query is sufficient.

- You can also kill any user sessions that have an active query running (see the query I posted earlier), using the sid and serial# as described in the following link:
http://www.oracle-base.com/articles/misc/KillingOracleSessions.php
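For reference, the kill syntax from that link boils down to this (the sid and serial# values come from the v$session query above):

ALTER SYSTEM KILL SESSION 'sid,serial#';
-- or, to return control immediately instead of waiting for the session to clean up:
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;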


- Then lock all user accounts before executing the query. You can query the dba_users view to get the list of Oracle users; be careful to select only application users, not Oracle system users:
ALTER USER username ACCOUNT LOCK;   -- lock the user account
ALTER USER username ACCOUNT UNLOCK; -- unlock the locked user account
Naveen Kumar
Why is it that this query should not run in parallel with others? I have not really understood your requirement, so please explain; there may be a better way to meet it than what is being done.
gs79

ASKER

Hello everyone,

Thank you for your responses. Here is a bit more explanation of what I am trying to do.

I am trying to read records from one table and insert them into another. CTAS is not a viable option, since dropping the table means we would have to recreate the relationships again.

And the insert is not a one-time operation; it is done every day. Right now it takes 4 hrs 30 min, and I need to bring it down to less than half an hour. It reads from a table with 300 million records and inserts around 20-30 million records.

To improve the performance I wanted to leverage Oracle's parallel processing features, so I decided to do this using parallel pipelined functions.

1. Listing 1 in the code below shows the insert statement. Basically it passes a record set into the table function as input and gets back the results to be inserted into the target table. In simplified form:

insert /*+ append parallel(target_tab) */ into target_tab
select * from table(LD_PKG1.tickets_txfm_fn(cursor(select * from src where <filter>)));

2. Listing 2 shows the package specification and body containing the parallel pipelined function. The pipelined function receives the data set, bulk processes it across parallel slave processes, and streams output rows as soon as they are processed so that they can be inserted in parallel.


So after executing Listing 1, I notice that the records are sometimes inserted by parallel slaves, which can be checked by issuing the following query right after the insert. You can see from the results below that there was no parallelism, but sometimes I do see the 'DML/DDL Parallelized' value at 1.

        select * from v$pq_sesstat;

STATISTIC            LAST_QUERY  SESSION_TOTAL
Queries Parallelized          0              2
DML Parallelized              0              0
DDL Parallelized              0              0
DFO Trees                     0              2
Server Threads                0              0
Allocation Height             0              0
Allocation Width              0              0
Local Msgs Sent               0           5331
Distr Msgs Sent               0              0
Local Msgs Recv'd             0           5395
Distr Msgs Recv'd             0              0
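One thing worth checking here, as an assumption on my part rather than something confirmed in the thread: a direct-path parallel INSERT is only parallelized when parallel DML has been enabled in the session, which would explain 'DML Parallelized' staying at 0 even with the hints in place. A minimal sketch:

ALTER SESSION ENABLE PARALLEL DML;

-- run the Listing 1 insert here, then:
COMMIT;

-- and re-check the parallel execution statistics
SELECT * FROM v$pq_sesstat;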

The parallelism in action can be further confirmed by introducing three audit columns in the target table, i.e. session_id, start_time and end_time.

select session_id,count(*), (max(end_time)-max(start_time))*24*60 as time_elapsed, max(start_time), max(end_time)
from DW_TICKETS_TRANS_IND_FACT_NEW2  group by session_id

SESSION_ID   COUNT(*)   TIME_ELAPSED (min)   MAX(START_TIME)          MAX(END_TIME)
5441         100000     0.666666666666667    11/7/2011 11:57:57 PM    11/7/2011 11:58:37 PM

When the query is processed in parallel (which is not happening right now, at the time of writing this), I can see 8 session ids with a more or less equal breakdown of the count, indicating that the records are processed in parallel.
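To see whether slaves are actually allocated while the insert is running, a query against V$PX_SESSION from a second session can be used (a sketch; DEGREE and REQ_DEGREE show the actual versus requested degree of parallelism):

SELECT qcsid, server_group, server_set, sid, degree, req_degree
  FROM v$px_session
 ORDER BY qcsid, server_group, server_set;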

There are two issues now:

1. Parallelism is not happening consistently, and it's hard to observe the performance.
2. Even when parallelism happens, there is no significant performance gain. I once executed the complete load in parallel and found that it took more or less the same amount of time as my existing method. I do not know if I am doing anything wrong. Is there a better way to improve the performance by any other method? Please advise.

Thanks
--listing 1

insert /*+ append parallel(DW_TICKETS_TRANS_IND_FACT_NEW1) */
            into  DW_TICKETS_TRANS_IND_FACT_NEW1(LIST_START_DT_DW_ID,
                          LIST_END_DT_DW_ID,
                          SELLER_DW_ID,
                          EVENT_DW_ID,
                          GENRE_DW_ID,
                          GEOGRAPHY_DW_ID,
                          session_id,
                          start_time,
                          end_time) 
 select *  from 
                 table(LD_PKG1.tickets_txfm_fn(cursor( SELECT/*+ parallel(trans_fact) */ 
                          LIST_START_DT_DW_ID,
                          LIST_END_DT_DW_ID,
                          SELLER_DW_ID,
                          EVENT_DW_ID,
                          GENRE_DW_ID,
                          RANK() OVER (PARTITION BY SELLER_DW_ID ORDER BY genre_dw_id) AS  GEOGRAPHY_DW_ID                         
                        FROM  trans_fact
WHERE COBRAND_DW_ID not in  (5001,4753,4679,5020,5021,4605,4614,44662,44664,44669,44783,44780,4405,4407,24735))));


--Listing 2

CREATE OR REPLACE package OWBRUNTARGET_DW.LD_PKG1
as
  type DW_TICKETS_TRANS_IND_FACT_tbl1 is table of DW_TICKETS_TRANS_IND_FACT_TYP1 ;
 
/*type tickets_trans_ind_fact_rc is ref cursor
 return DW_TICKETS_TRANS_FACT%rowtype;*/
 function tickets_txfm_fn(p_source_data IN --tickets_trans_ind_fact_rc
                                    sys_refcursor)
 return DW_TICKETS_TRANS_IND_FACT_tbl1

 parallel_enable (partition p_source_data by any) pipelined;
 end LD_PKG1;
/

CREATE OR REPLACE PACKAGE BODY OWBRUNTARGET_DW.LD_PKG1
as
function tickets_txfm_fn(p_source_data IN  --tickets_trans_ind_fact_rc
                                             sys_refcursor)
 return DW_TICKETS_TRANS_IND_FACT_tbl1

 parallel_enable (partition p_source_data by any) pipelined 
 is
  TYPE l_source_data_typ is TABLE OF  dw_tickets_trans_fact_vw%ROWTYPE INDEX BY binary_integer;  
 --l_source_data_tbl  DW_TICKETS_TRANS_IND_FACT_NEW%ROWTYPE;
 l_source_data_tbl    l_source_data_typ;
 --TYPE l_source_data_typ is TABLE OF DW_TICKETS_TRANS_IND_FACT_TYP1%ROWTYPE INDEX BY binary_integer;  
 
 l_limit_size             number :=100;
  v_sid number;
  l_num NUMBER:=0;
  l_start_time date:=sysdate;
  l_end_time date;

 BEGIN
  -- capture this slave's session id for the audit columns
  select sid into v_sid from v$mystat where rownum = 1;

  loop
    FETCH p_source_data BULK COLLECT INTO l_source_data_tbl LIMIT l_limit_size;
    l_num := l_num + l_source_data_tbl.COUNT;

    for i in 1 .. l_source_data_tbl.COUNT loop
      -- dbms_lock.sleep(0.005);
      l_end_time := sysdate;

      -- stream each transformed row to the consumer as soon as it is ready
      pipe row(DW_TICKETS_TRANS_IND_FACT_TYP1(
                 l_source_data_tbl(i).LIST_START_DT_DW_ID,
                 l_source_data_tbl(i).LIST_END_DT_DW_ID,
                 l_source_data_tbl(i).SELLER_DW_ID,
                 l_source_data_tbl(i).EVENT_DW_ID,
                 l_source_data_tbl(i).GENRE_DW_ID,
                 l_source_data_tbl(i).GEOGRAPHY_DW_ID,
                 v_sid,
                 l_start_time,
                 l_end_time));
    end loop;

    EXIT WHEN p_source_data%notfound;
  end loop;

  CLOSE p_source_data;
  return;
end tickets_txfm_fn;
end LD_PKG1;
/


>>Even when parallelism happens there is no significant performance gain

Then is it worth pursuing why Oracle's optimizer chooses to ignore the parallel hint? Remember, they are 'hints' and can be ignored.

>>Right now it is taking 4 hrs 30 min while I need to bring it down to less than half hour

Is this really a realistic goal?  The only way I would think this is possible is to upgrade your hardware.

When you run the process, what is the current limiting factor?  Is it CPU, disk, memory, combination, ???
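If the hint keeps being ignored, one thing that could be tested (an assumption on my part, not something confirmed in this thread to help here) is forcing parallelism at the session level before running the insert:

ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;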
gs79

ASKER

The 4.30 hours is a typo in the previous related thread; I haven't asked that here. The current process in place takes 4:30. The goal is to reduce it to half an hour or less using the pipelined function.

I think I had a good test yesterday: the query ran in 8 slaves and finished in less than 45 min. The insert itself was done in 35 minutes, and index creation took another 10 min. Quite a performance boost. The hardware is very good; we have 64 processors.

Now the challenge is to ensure that this program uses parallel slaves whenever it is run. It is pretty unpredictable: sometimes it runs in parallel and sometimes it does not, running in a single thread instead.
ASKER CERTIFIED SOLUTION
slightwv (䄆 Netminder)