SQL optimization: avoid latch free wait event


Hi Experts, I have a statement to optimize. It is related to a table of customers and their logins; let's say the table is like this:

customer_id login_time q_success ip
------------------------------------------
1  1/1/2006 15:20:21   0   192.168.17.11
1  1/2/2006 15:22:24   1   192.168.17.12
1  1/3/2006 15:30:26   0   192.168.17.13
1  1/1/2006 15:10:28   1   192.168.17.14

q_success indicates whether the login was successful (1) or not (0),
and the other columns are not related to the problem, I guess.
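Roughly, the table could be described with DDL like this (a sketch only; the types are approximate and the real table has more columns):

CREATE TABLE mb.log_login (
  customer_id NUMBER,
  login_time  DATE,
  q_success   NUMBER(1),      -- 1 = successful login, 0 = failed
  ip          VARCHAR2(15)
  -- ... plus the other, unrelated columns
);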

The problematic SQL statement (it counts the number of unsuccessful login attempts after the last successful login) is:
SELECT COUNT(1)
  FROM mb.log_login
 WHERE customer_id = 2345345
   AND q_success = 0
   AND login_time > (SELECT MAX(login_time)
                       FROM mb.log_login
                      WHERE customer_id = 2345345
                        AND q_success = 1)

Is it possible to avoid the subquery, for example by using an analytic function or something else? I receive a lot of latch free wait events on this statement (cache buffers chains). The table has ~20 million rows and has indexes on:

- first index on customer_id, login_time, q_success, ip
- ip, login_time
- login_time


How can I avoid these latch free wait events? There are other similar SQL statements, but this one accounts for most of the latch free waits.

Thank You
Asked by: adrian_ang
1 Solution
 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
The following syntax should return the same results:

SELECT COUNT(1)
FROM mb.log_login s
JOIN mb.log_login f
  ON s.customer_id = f.customer_id
 AND s.q_success = 1
 AND f.q_success = 0
 AND f.login_time > s.login_time
 
adrian_ang (Author) commented:
Thanks angelIII,

This avoids the subquery but is very slow. For the same customer, the statement with the subquery runs in 1.141 seconds, while yours ran much longer; after 10 seconds I stopped it.

The exec plan of the first one is:

select statement
   index range scan on first index
     sort aggregate
       index range scan on first index

The exec plan of yours is:

select statement
   sort aggregate
    nested loops
       index range scan on first index
       index range scan on first index

When I stopped it, it had done ~44,000 logical reads. I ran yours after the one with the subquery (which makes ~600 logical reads and finishes in 1.141 seconds).

So, in this case, this is not acceptable, but thanks for the proposal.


 
Guy Hengel [angelIII / a3], Billing Engineer, commented:
I did not say that my SQL avoiding the subquery would be better/faster :-)

>the table is ~20 millions and has indexes on:
You should either think about partitioning the table, and/or use materialized views. Both should be able to make the query faster by reducing the number of table and index blocks that have to be looked up for this query.

However, the best solution would be to keep the number of failed logins in another table (customer_id, failed_logins), which would be adjusted by an insert trigger on your current table. That would not add a lot of overhead on the insert, but would reduce your query time to 0.1 seconds or even less.
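A rough sketch of what that could look like (the summary table and trigger names below are only examples, and it assumes logins are inserted in chronological order):

CREATE TABLE mb.failed_login_count (
  customer_id   NUMBER PRIMARY KEY,
  failed_logins NUMBER DEFAULT 0 NOT NULL
);

CREATE OR REPLACE TRIGGER mb.trg_log_login_failed_cnt
AFTER INSERT ON mb.log_login
FOR EACH ROW
BEGIN
  IF :NEW.q_success = 1 THEN
    -- a successful login resets the running count of failures
    -- (if the customer has no row yet, the reset simply does nothing)
    UPDATE mb.failed_login_count
       SET failed_logins = 0
     WHERE customer_id = :NEW.customer_id;
  ELSE
    -- a failed login increments the count; create the row if it does not exist yet
    UPDATE mb.failed_login_count
       SET failed_logins = failed_logins + 1
     WHERE customer_id = :NEW.customer_id;
    IF SQL%ROWCOUNT = 0 THEN
      INSERT INTO mb.failed_login_count (customer_id, failed_logins)
      VALUES (:NEW.customer_id, 1);
    END IF;
  END IF;
END;
/

Your query would then become a single-row lookup:

SELECT failed_logins FROM mb.failed_login_count WHERE customer_id = 2345345;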
 
GGuzdziol commented:
Maybe this?

select count(*)
  from (
    select sum(q_success) over (order by login_time desc) x
      from mb.log_login
  )
  where x = 0;
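In the real query you would presumably add the customer_id filter inside the inline view, so the index on (customer_id, login_time, ...) can still be used, e.g.:

select count(*)
  from (
    select sum(q_success) over (order by login_time desc) x
      from mb.log_login
     where customer_id = 2345345
  )
  where x = 0;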
 
MohanKNair commented:
Try this query

select customer_id, MIN(login_time), MAX(login_time), MAX(cnt_login) count_login
  from (select customer_id, login_time, q_success, ip,
               row_number() OVER (partition by customer_id, q_success order by customer_id, login_time) cnt_login
          from log_login)
 WHERE q_success = 0
 GROUP BY customer_id
/


 
adrian_ang (Author) commented:
Hi all, thanks for the replies, but the statement must count the number of failed login attempts after the last successful one. Example:

123  1/1/2006 0 192.168.17.115
123  2/1/2006 1 192.168.17.114
123  3/1/2006 1 192.168.17.113
123  4/1/2006 0 192.168.17.112
123  5/1/2006 0 192.168.17.111
123  6/1/2006 0 192.168.17.110

 The statement must return 3


 
GGuzdziol commented:
I still claim my solution works as expected:

SQL> create table log_login(id number, login_time date, q_success number, ip varchar2(15));

Table created.

SQL> insert into log_login values (123, TO_DATE('01/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('02/01/2006', 'DD/MM/YYYY'), 1,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('03/01/2006', 'DD/MM/YYYY'), 1,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('04/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('05/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> insert into log_login values (123, TO_DATE('06/01/2006', 'DD/MM/YYYY'), 0,'192.168.0.1');

1 row created.

SQL> commit;

Commit complete.

SQL> ed
Wrote file afiedt.buf

  1  select count(*)
  2    from (
  3      select sum(q_success) over (order by login_time desc) x
  4        from log_login
  5    )
  6*   where x = 0
SQL> /

  COUNT(*)
----------
         3
 
adrian_ang (Author) commented:
Thanks GGuzdziol,
It works well. I have to deploy it and see whether, with only one index access instead of two, the latch free waits will be avoided.

