sunhux asked:
Scripts / commands to detect failed DB login attempts and querying of unusually large amounts of data
Without DB Activity Monitoring in place, we would like a daily job/script
(Windows Scheduled Task for MS SQL and Unix cron for Oracle) to
detect whether the following took place:
a) failed login attempts to a DB (even a single failed attempt)
b) an unusually high amount of data being queried (say, 10 times more
than the last 30 days' average; if this is not possible, then 15 MB of
data)
Q1:
Can you provide the specific commands (SQL or OS) for both DB types
above, for Windows & UNIX (Solaris & RHEL)?
Q2:
For item a) above, which file in MS SQL & Oracle DB contains that info?
As the scripts run once per day, ideally each run covers only the last 24 hours'
events/logs, so that the next run does not report the same events again.
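On Q2, a hedged sketch of where each engine records failed logins, assuming MS SQL Server's default login auditing ("failed logins only", written to the SQL Server error log) and Oracle auditing already enabled (audit_trail=db plus AUDIT SESSION WHENEVER NOT SUCCESSFUL); the 24-hour window matches the daily schedule above:

```sql
-- MS SQL Server: failed logins land in the current error log (ERRORLOG file);
-- read it from T-SQL and filter on the standard message text.
EXEC xp_readerrorlog 0, 1, N'Login failed';

-- Oracle: with auditing enabled, failed logins are stored in SYS.AUD$,
-- exposed via DBA_AUDIT_SESSION (ORA-01017 shows up as returncode 1017).
SELECT os_username, username, userhost, timestamp
  FROM dba_audit_session
 WHERE returncode = 1017
   AND timestamp > SYSDATE - 1;   -- last 24 hours only
```

The exact error-log file number passed to xp_readerrorlog (0 = current) and the audit settings are assumptions; both would need to match the actual server configuration.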
ASKER
Thanks for the script, Johnsone; how do we cut the queried-data figure off on a daily basis?
Is it possible to add a criterion restricting the query to the last 24 hrs? Thanks.
ASKER
Thanks; how do we detect a sudden large amount of data being queried?
OK, just as our DBA replied:
"We will not only monitor the 1017 error code. Better to capture all login error scenarios,
as the regulator wanted it.
Btw, turning on audit at the table level will eat a lot of space and resources." <== to detect a large amount of data queried?
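In line with the DBA's point about capturing all login errors rather than only ORA-01017, a hedged Oracle sketch (assumes audit_trail=db is already in effect): session-level auditing of unsuccessful logins is far lighter than table-level audit, and the daily extract can filter on any non-zero return code:

```sql
-- One-time setup: record only unsuccessful connection attempts.
AUDIT SESSION WHENEVER NOT SUCCESSFUL;

-- Daily job: all failed logins (any error code) in the last 24 hours.
SELECT os_username, username, userhost, returncode, timestamp
  FROM dba_audit_session
 WHERE returncode <> 0
   AND timestamp > SYSDATE - 1;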
This condition will be very hard to detect: "unusual high amount of data being queried". Yes, you can set up one or more custom resource profiles (your database will already have at least one default profile) and assign them to the various Oracle users in your database. But, these do not track the amount of data being sent to a client. These only track the number of blocks of data that a query reads in the database server, regardless of how many of those rows actually get returned to the client.
ASKER
> set up a resource profile to stop people from running queries that do too many reads
Can you elaborate with an example profile that stops queries of more than 30 MB of data, if it's
not possible to restrict by the number of rows (I was looking at 300,000 rows)?
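A hedged sketch of such a profile, with the caveat from the solution above: LOGICAL_READS_PER_CALL counts database blocks read by the server per call, not bytes returned to the client, and there is no per-row limit. Assuming an 8 KB block size, 30 MB works out to 3840 blocks; the profile and user names are hypothetical:

```sql
-- 30 MB / 8 KB per block = 3840 blocks (adjust for your block size).
CREATE PROFILE heavy_reader_limit LIMIT
  LOGICAL_READS_PER_CALL 3840;

ALTER USER scott PROFILE heavy_reader_limit;   -- hypothetical user

-- Kernel resource limits are only enforced while this is TRUE:
ALTER SYSTEM SET resource_limit = TRUE;
```

A call that exceeds the limit is aborted with an ORA- error rather than silently truncated, so this blocks rather than alerts.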
ASKER
If it doesn't block, perhaps an alert/detection that a query returning more than 30 MB
(or 300,000 rows) has been run (possibly within the last 1 hour, as we plan to detect this
on an hourly basis)?
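For detection rather than blocking, a hedged sketch against V$SQL: flag statements active in the last hour whose rows processed or blocks read per execution exceed the thresholds. Caveats: V$SQL only shows cursors still cached in the shared pool, so short-lived statements can be missed, and the 3840-block figure again assumes 8 KB blocks:

```sql
SELECT sql_id,
       parsing_schema_name,
       rows_processed,
       buffer_gets,
       executions,
       sql_text
  FROM v$sql
 WHERE last_active_time > SYSDATE - 1/24          -- active in the last hour
   AND executions > 0
   AND (rows_processed / executions > 300000      -- row threshold
        OR buffer_gets / executions > 3840);      -- ~30 MB at 8 KB/block
```

An hourly cron/Scheduled Task could run this and mail any rows returned.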
ASKER
Q3:
Referring to the above URL, what's the equivalent Oracle command to set
' audit_trail="true" ', and is there any way to do it without restarting the
Oracle instance? Also, is it necessary to enable the Audit Trail to get this
info on failed login attempts?
Q4:
Our DBA said turning on the Audit Trail can have a major performance impact:
what can we do about this? How do we estimate the disk space the Audit
Trail will take?
Q5:
What's the command to roll over (i.e. once per day) to a new Audit Trail file once
the SQL query to get the failed login attempts is done?
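On Q3 and Q5, a hedged sketch (Oracle 11g syntax assumed): audit_trail is a static parameter, so it can only be changed in the spfile and needs an instance restart to take effect; without the audit trail enabled, failed login attempts are not written to a standard queryable location, so enabling it is the usual route. Purging records after the daily extract can use the DBMS_AUDIT_MGMT package (11g+), which must be initialised once with INIT_CLEANUP before the purge call works:

```sql
-- Q3: set in the spfile; a restart is required before it takes effect.
ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;

-- Q5: purge the standard audit trail after the daily extract (11g+).
-- Assumes DBMS_AUDIT_MGMT.INIT_CLEANUP has been run once beforehand.
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);   -- FALSE = purge all records
END;
/
```

On Q4, space usage is roughly one AUD$ row per audited event, so auditing only failed sessions (as above) rather than table access keeps both the performance and disk impact small.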