Solved

SQL optimization, avoid latch free wait event

Posted on 2006-06-11
1,431 Views
Last Modified: 2008-02-07
Hi Experts,

I have a statement to optimize. It is related to a table of customers and their logins; let's say the table looks like this:

customer_id login_time q_success ip
------------------------------------------
1  1/1/2006 15:20:21   0   192.168.17.11
1  1/2/2006 15:22:24   1   192.168.17.12
1  1/3/2006 15:30:26   0   192.168.17.13
1  1/1/2006 15:10:28   1   192.168.17.14

q_success indicates whether the login was successful (1) or not (0); the other columns are not related to the problem, I think.

The problematic SQL statement is (it counts the number of unsuccessful login attempts after the last successful login):
SELECT COUNT(1)
  FROM mb.log_login
 WHERE customer_id = 2345345
   AND q_success = 0
   AND login_time > (SELECT MAX(login_time)
                       FROM mb.log_login
                      WHERE customer_id = 2345345
                        AND q_success = 1)

Is it possible to avoid the subquery, perhaps with an analytic function or something else? I receive a lot of latch free wait events on this statement (cache buffers chains). The table has ~20 million rows and has the following indexes (roughly the DDL sketched below):

- first index on customer_id, login_time, q_success, ip
- ip, login_time
- login_time
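
For reference, the index definitions would look roughly like this (the index names are made up, only the column lists above come from the description):

CREATE INDEX log_login_ix1 ON mb.log_login (customer_id, login_time, q_success, ip);
CREATE INDEX log_login_ix2 ON mb.log_login (ip, login_time);
CREATE INDEX log_login_ix3 ON mb.log_login (login_time);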


How can I avoid these latch free wait events? There are other similar SQLs, but this one generates most of the latch free waits.

Thank You
Question by:adrian_ang
8 Comments
 
LVL 143

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 16880798
The following syntax should return the same results:

SELECT COUNT(1)
FROM mb.log_login s
JOIN mb.log_login f
  ON s.customer_id = f.customer_id
 AND s.q_success = 1
 AND f.q_success = 0
 AND f.login_time > s.login_time
 
LVL 5

Author Comment

by:adrian_ang
ID: 16880831
Thanks angelIII,

this avoids the subquery but is very slow. For the same customer, the statement with the subquery runs in 1.141 seconds, while yours ran much longer; I stopped it after 10 seconds.

The exec plan of the first one is:

select statement
   index range scan on first index
     sort aggregate
       index range scan on first index

The exec plan of yours is:

select statement
   sort aggregate
    nested loops
       index range scan on first index
       index range scan on first index

When I stopped it, it had done ~44,000 logical reads. I ran yours after the one with the subquery (which does ~600 logical reads and finishes in 1.141 seconds).

So, in this case, this is not acceptable, but thanks for the proposal.


 
LVL 143

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 16880863
I did not say that my SQL avoiding the subquery would be better/faster :-)

>The table has ~20 million rows and has the following indexes:
Either you should think about partitioning the table, and/or use materialized views. Both should be able to make the query faster by reducing the number of table and index blocks that have to be looked up for this query.

However, the best solution would be to keep the number of failed logins in another table (customer_id, failed_logins), maintained by an insert trigger on your current table (see the sketch below). It would not add much overhead on insert, but it would reduce your query time to 0.1 seconds or even less.
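
A minimal sketch of that idea, assuming a summary table called mb.customer_failed_logins and a trigger called trg_log_login_ai (both names are made up; only the (customer_id, failed_logins) layout comes from the suggestion above):

-- Hypothetical summary table: one row per customer with the current failure count.
CREATE TABLE mb.customer_failed_logins (
  customer_id   NUMBER PRIMARY KEY,
  failed_logins NUMBER NOT NULL
);

-- On every insert into the log table, a failed login increments the counter
-- and a successful login resets it to 0.
CREATE OR REPLACE TRIGGER mb.trg_log_login_ai
AFTER INSERT ON mb.log_login
FOR EACH ROW
BEGIN
  MERGE INTO mb.customer_failed_logins c
  USING (SELECT :NEW.customer_id AS customer_id, :NEW.q_success AS q_success FROM dual) n
     ON (c.customer_id = n.customer_id)
  WHEN MATCHED THEN UPDATE SET
       c.failed_logins = CASE WHEN n.q_success = 1 THEN 0 ELSE c.failed_logins + 1 END
  WHEN NOT MATCHED THEN INSERT (customer_id, failed_logins)
       VALUES (n.customer_id, CASE WHEN n.q_success = 1 THEN 0 ELSE 1 END);
END;
/

The original statement would then shrink to something like SELECT failed_logins FROM mb.customer_failed_logins WHERE customer_id = 2345345.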
 
LVL 14

Accepted Solution

by:
GGuzdziol earned 200 total points
ID: 16881805
Maybe this?

select count(*)
  from (
    select sum(q_success) over (order by login_time desc) x
      from mb.log_login
  )
  where x = 0;
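
A brief note on why this works: the running sum of q_success, taken in descending login_time order, stays at 0 for every row newer than the most recent successful login, so counting the rows where x = 0 gives the failures since that login. For the real 20-million-row table, the customer_id predicate from the original statement presumably still belongs inside the inline view, so that only that customer's rows are read:

select count(*)
  from (
    select sum(q_success) over (order by login_time desc) x
      from mb.log_login
     where customer_id = 2345345
  )
  where x = 0;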
 
LVL 16

Expert Comment

by:MohanKNair
ID: 16883237
Try this query

select customer_id, MIN(login_time), MAX(login_time), MAX(cnt_login) count_login
  from (select customer_id, login_time, q_success, ip,
               row_number() OVER (partition by customer_id, q_success
                                  order by customer_id, login_time) cnt_login
          from log_login)
 WHERE q_success = 0
 GROUP BY customer_id
/


 
LVL 5

Author Comment

by:adrian_ang
ID: 16883756
Hi all, thanks for the replies, but the statement must count the number of failed login attempts after the last successful one. Example:

123  1/1/2006 0 192.168.17.115
123  2/1/2006 1 192.168.17.114
123  3/1/2006 1 192.168.17.113
123  4/1/2006 0 192.168.17.112
123  5/1/2006 0 192.168.17.111
123  6/1/2006 0 192.168.17.110

 The statement must return 3


 
LVL 14

Expert Comment

by:GGuzdziol
ID: 16883825
I still claim my solution works as expected:

SQL> create table log_login(id number, login_time date, q_success number, ip varchar2(15));

Table created.

SQL> insert into log_login values (123, TO_DATE('01/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('02/01/2006', 'DD/MM/YYYY'), 1,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('03/01/2006', 'DD/MM/YYYY'), 1,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('04/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('05/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('06/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> commit;

Commit complete.

SQL> ed
Wrote file afiedt.buf

  1  select count(*)
  2    from (
  3      select sum(q_success) over (order by login_time desc) x
  4        from log_login
  5    )
  6*   where x = 0
SQL> /

  COUNT(*)
----------
         3
 
LVL 5

Author Comment

by:adrian_ang
ID: 16883837
Thanks GGuzdziol,

it works well. I have to upload it and see whether, with the index being used only once instead of twice, the latch free waits are avoided.

