sunhux
asked:

Scripts / commands to detect failed DB login attempts and querying of unusually large amounts of data

Without DB Activity Monitoring in place, we would like a daily job/script
(Windows Scheduled Task for MS SQL and Unix cron for Oracle) to
detect whether the following took place:

a) failed login attempts to a DB (even a single failed attempt)
b) an unusually high amount of data being queried (say, 10 times more
    than the last 30 days' average; if that is not possible, then 15 MB of
    data)

Q1:
Can you provide the specific commands (SQL or OS) for both DB types
above, for Windows & UNIX (Solaris & RHEL)?

Q2:
For item a) above, which file in MS SQL & Oracle DB contains that info?

As the scripts run once a day, ideally each run covers only the last 24 hours'
events/logs, so that the next run does not pick up the same events again.
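A sketch of the kind of queries involved, assuming Oracle's standard audit trail is enabled (`audit_trail=db` plus `AUDIT SESSION`) and SQL Server is at its default "failed logins only" audit level:

```sql
-- Oracle: failed logons recorded by standard auditing
-- (assumes audit_trail=db and AUDIT SESSION WHENEVER NOT SUCCESSFUL)
SELECT os_username, username, userhost, timestamp, returncode
FROM   dba_audit_session
WHERE  returncode <> 0            -- 1017 = invalid username/password
  AND  timestamp >= SYSDATE - 1;  -- last 24 hours

-- MS SQL Server: failed logins (error 18456) go to the ERRORLOG by
-- default; sp_readerrorlog filters it (0 = current log, 1 = SQL log)
EXEC sp_readerrorlog 0, 1, N'Login failed';
```

Either query can be wrapped in a cron/Scheduled Task wrapper (sqlplus / sqlcmd) that mails or logs any rows returned.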
Tags: Databases, Microsoft SQL Server, Oracle Database, SQL

Last Comment: johnsone
sunhux

ASKER
https://www.netwrix.com/how_to_track_down_failed_oracle_logon_attempts.html

Q3:
Referring to the above URL, what's the equivalent Oracle command to set
' audit_trail="true" '? Is there any way to do it without restarting the
Oracle instance, and is enabling the Audit Trail necessary to get this
failed-login information?
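For reference, AUDIT_TRAIL is a static parameter, so it cannot be changed without a restart; a sketch of the standard sequence (TRUE is a legacy synonym for DB):

```sql
-- Static parameter: only settable in the spfile, takes effect
-- after the next instance restart
ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;

-- After the restart, record unsuccessful logons:
AUDIT SESSION WHENEVER NOT SUCCESSFUL;
```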

Q4:
Our DBA said turning on the Audit Trail can have a major performance impact:
what can we do about this? How do we estimate the disk space the Audit
Trail will take?
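One way to gauge the space the standard audit trail is using (a sketch; actual growth depends on how many auditable events fire per day):

```sql
-- Current size of the standard (and fine-grained) audit trail tables
SELECT segment_name, ROUND(bytes / 1024 / 1024, 1) AS size_mb
FROM   dba_segments
WHERE  owner = 'SYS'
  AND  segment_name IN ('AUD$', 'FGA_LOG$');

-- Rough daily growth: audited events in the last 24 hours
SELECT COUNT(*) AS events_last_day
FROM   dba_audit_trail
WHERE  timestamp >= SYSDATE - 1;
```

Multiplying the daily event count by the average row size gives a rough retention estimate; auditing only session-level events (not table-level) keeps both the overhead and the space modest.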

Q5:
What's the command to roll over (i.e. once a day) to a new Audit Trail file once the
SQL query that collects the failed login attempts has run?
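With `audit_trail=db` there is no file to roll over; the equivalent is a daily purge of audit rows older than a cutoff via the documented DBMS_AUDIT_MGMT package (a sketch; a one-time `DBMS_AUDIT_MGMT.INIT_CLEANUP` call may be required first):

```sql
-- Mark everything older than 24 hours as archived, then purge
-- only those rows (newer rows survive for the next day's report)
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '1' DAY);
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/
```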
SOLUTION
johnsone

SOLUTION
Mark Geerlings

ASKER CERTIFIED SOLUTION
johnsone

sunhux

ASKER
Thanks for the script, johnsone; how do we cut off the data queried on a daily basis?
Is it possible to add a criterion to query only the last 24 hrs? Thanks.
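Assuming the failed-logon data comes from DBA_AUDIT_SESSION, restricting it to the last 24 hours is a one-line predicate (a sketch):

```sql
SELECT os_username, username, userhost, timestamp, returncode
FROM   dba_audit_session
WHERE  returncode = 1017           -- ORA-01017: invalid credentials
  AND  timestamp >= SYSDATE - 1;   -- last 24 hours only
```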
SOLUTION
Mark Geerlings

sunhux

ASKER
Thanks; regarding a sudden large amount of data being queried, how do we detect this?

OK, just as our DBA replied:
"we will not only monitor the 1017 error code. Better to capture all login error scenarios,
  as the regulator wanted it.
  Btw, turning on audit at the table level will eat a lot of space and resources" <== to detect a large amount of data queried?
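To capture every failed-logon scenario rather than just ORA-01017, one can filter on any non-zero return code and group by it (a sketch):

```sql
SELECT returncode, COUNT(*) AS failures
FROM   dba_audit_session
WHERE  returncode <> 0             -- any logon error, not just 1017
  AND  timestamp >= SYSDATE - 1
GROUP  BY returncode
ORDER  BY failures DESC;
```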
SOLUTION
johnsone

Mark Geerlings

This condition will be very hard to detect: "unusual high amount of data being queried".  Yes, you can set up one or more custom resource profiles (your database will already have at least one default profile) and assign them to the various Oracle users in your database.  But, these do not track the amount of data being sent to a client.  These only track the number of blocks of data that a query reads in the database server, regardless of how many of those rows actually get returned to the client.
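A sketch of such a profile; as noted above it caps blocks read per call, not data returned to the client. The numbers are illustrative (with an 8 KB block size, 3,840 blocks is roughly 30 MB read), and `resource_limit` must be TRUE for the limit to be enforced; `report_user` is a hypothetical account name:

```sql
-- Enforcement of kernel resource limits must be switched on
ALTER SYSTEM SET resource_limit = TRUE;

-- ~30 MB of reads per call with an 8 KB block size (3840 * 8 KB);
-- a query exceeding this fails with ORA-02395
CREATE PROFILE capped_reads LIMIT
  LOGICAL_READS_PER_CALL 3840;

-- Assign the profile to the user(s) to be limited
ALTER USER report_user PROFILE capped_reads;
```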
sunhux

ASKER
> set up a resource profile to stop people from running queries that do too many reads
Can you elaborate with an example profile that stops queries reading more than 30 MB
of data, if it's not possible to restrict by the number of rows (I was looking at 300,000 rows)?
sunhux

ASKER
If it doesn't block, perhaps an alert/detection that a query returning more than 30 MB
(or 300,000 rows) has been run (possibly within the last hour, as we plan to check this
on an hourly basis)?
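A hedged sketch of an hourly detector using V$SQL: note that `buffer_gets` and `rows_processed` are cumulative across all executions of a cursor, and the 8192 multiplier assumes an 8 KB block size, so this flags candidates rather than giving exact per-query accounting:

```sql
-- Cursors active in the last hour that processed > 300,000 rows
-- or read roughly more than 30 MB
SELECT sql_id, parsing_schema_name, rows_processed,
       ROUND(buffer_gets * 8192 / 1024 / 1024) AS approx_mb_read
FROM   v$sql
WHERE  last_active_time >= SYSDATE - 1/24
  AND  (rows_processed > 300000
        OR buffer_gets * 8192 > 30 * 1024 * 1024);
```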
SOLUTION
johnsone
