sword12
asked:

Oracle redo logs on Linux with SAP

Hi Experts

We have an SAP ERP system running on Red Hat Linux 7 with Oracle 12c.

From time to time, the file system reserved for the Oracle logs fills up very quickly, all of a sudden.

We are not able to determine the root cause: which process, transaction, or user is generating that volume of logs.

Do you know of any way or tool to find this information?


thanks
Tags: Linux, SAP, Oracle Database

Erman Arslan

Hi,

You probably mean the redo logs / archived logs.
You can use Oracle LogMiner to mine those logs and see the transactions that are causing the issue.
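A minimal LogMiner session looks roughly like this. This is a sketch only: the archived log path below is a placeholder (take a real one from V$ARCHIVED_LOG), and you need a user with the required LogMiner privileges.

BEGIN
  -- file name is an example only; replace with a real log from V$ARCHIVED_LOG
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/oraarch/arch_1_1234_567890.dbf',
    OPTIONS     => DBMS_LOGMNR.NEW);
  -- use the online catalog as the dictionary (fine when mining on the source DB)
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- which segments received the most change records in this log?
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
  FROM v$logmnr_contents
 GROUP BY seg_owner, seg_name, operation
 ORDER BY redo_records DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR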

ermanarslan.blogspot.com
sword12

ASKER
Hi

Our problem is that, for Oracle, we have only one service account, which SAP uses.

A lot of users work in SAP, but SAP connects to the database as that single user.

We need to know which action or transaction created the job that generated such big logs.

Any advice here?


Thank you
ASKER CERTIFIED SOLUTION
Erman Arslan

johnsone

As suggested, LogMiner would be what you are looking for. There are columns in V$LOGMNR_CONTENTS that identify the machine, OS user, and transaction.

I highly recommend looking through only a very small number of files with LogMiner at once. The more files you load into LogMiner, the slower it gets. In the past, I found it was actually faster to query the logs one at a time.
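For example, a query along these lines (run while a LogMiner session is active, i.e. between START_LOGMNR and END_LOGMNR) shows where the redo is coming from:

-- assumes START_LOGMNR has already been run for the log(s) of interest
SELECT os_username, machine_name, username,
       COUNT(*) AS redo_records,
       COUNT(DISTINCT xidusn || '.' || xidslt || '.' || xidsqn) AS txns
  FROM v$logmnr_contents
 GROUP BY os_username, machine_name, username
 ORDER BY redo_records DESC;

Note that since SAP connects through a single database user, USERNAME (and likely OS_USERNAME and MACHINE_NAME, which will point at the SAP application server) may look the same everywhere; in that case grouping by SEG_OWNER and SEG_NAME to see which tables are being hammered is often the more useful pivot.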
Mark Geerlings

Yes, Oracle's LogMiner utility is the tool you can use to look at the archived redo log files and determine which Oracle tables have had the most transactions in the time period you are interested in. I agree with johnsone's recommendation that you should start with just one, or a very small number of, logs at first.

Queries of the LogMiner information are not particularly fast, especially if you try to query multiple log files at one time. If you have a test system with the same Oracle database software version as your production system, you can copy a few log files to the test system and run LogMiner there instead of on your primary system. (To do this, you will need to create a LogMiner dictionary file on your production system and copy it to your test system, unless your test system has an up-to-date clone of your production system.) That way you avoid slowing down (or possibly crashing) your production system when you run LogMiner. I haven't seen LogMiner cause crashes very often in Oracle 11, but in earlier versions of Oracle I saw a number of instance crashes due to LogMiner.
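A sketch of that dictionary-file approach; the paths and file names below are examples only (and on older releases the flat-file build also depends on UTL_FILE_DIR being set):

-- on production: write the dictionary to a flat file
BEGIN
  DBMS_LOGMNR_D.BUILD(
    DICTIONARY_FILENAME => 'logmnr_dict.ora',   -- example name
    DICTIONARY_LOCATION => '/tmp',              -- example path
    OPTIONS             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
END;
/

-- on the test system: copy the dictionary and the log files over, then
-- point LogMiner at the flat-file dictionary instead of the online catalog
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/tmp/arch_1_1234_567890.dbf',  -- copied log, example name
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    DICTFILENAME => '/tmp/logmnr_dict.ora');
END;
/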
johnsone

Interesting.... I have never had LogMiner crash an instance, and I used it a lot when it first came out. I haven't used it much lately, though.

This almost sounds like looking through one log might be sufficient to find what you are looking for. It sounds like one rogue transaction. If you can find a log that is mostly filled with one connection, that is probably your culprit.

If I ever needed to go through a large number of files, I scripted it to process the files one at a time and extract the relevant information into a table that I could examine after all the files were processed. It was a lot faster than trying to do multiple files at once.
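A minimal sketch of that approach; REDO_AUDIT is a made-up work-table name, and the column list is just one reasonable choice:

-- work table; V$LOGMNR_CONTENTS cannot be queried outside an active
-- LogMiner session, so define the columns explicitly (sizes are guesses)
CREATE TABLE redo_audit (
  os_username  VARCHAR2(4000),
  machine_name VARCHAR2(4000),
  username     VARCHAR2(128),
  seg_owner    VARCHAR2(128),
  seg_name     VARCHAR2(256),
  operation    VARCHAR2(32),
  log_time     DATE
);

BEGIN
  -- walk the last day's archived logs one file at a time
  FOR r IN (SELECT name FROM v$archived_log
             WHERE first_time > SYSDATE - 1
             ORDER BY sequence#) LOOP
    DBMS_LOGMNR.ADD_LOGFILE(
      LOGFILENAME => r.name,
      OPTIONS     => DBMS_LOGMNR.NEW);   -- NEW = exactly one file per pass
    DBMS_LOGMNR.START_LOGMNR(
      OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    -- keep only the summary columns, then release the session
    INSERT INTO redo_audit
      SELECT os_username, machine_name, username,
             seg_owner, seg_name, operation, timestamp
        FROM v$logmnr_contents;
    COMMIT;
    DBMS_LOGMNR.END_LOGMNR;
  END LOOP;
END;
/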
Mark Geerlings

I have also used that technique of copying some of the columns from LogMiner into work tables, where they can be queried much faster. But you do have to try some LogMiner queries directly first to find the data you are looking for.

I'm also attaching a "log_miner_sample.sql" file that contains the actual LogMiner commands and queries I used in SQL*Plus on our database server some years ago, when I used LogMiner to help with a problem we had at the time. Note that the "dbms_logmnr.ADD_LOGFILE…" lines contain actual file paths and file names from our server at the time. You will have to supply the file paths and file names from your own system when you run "dbms_logmnr.ADD_LOGFILE…" commands.
log_miner_sample.sql
Ora_Techie

How many redo logs do you have, and of what size? Do you have any job that regularly rebuilds (or coalesces) the indexes?
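Both are quick to check; something along these lines shows the redo log configuration and how often the logs are switching:

-- current redo log groups and sizes
SELECT group#, bytes/1024/1024 AS size_mb, members, status FROM v$log;

-- log switches per hour over the last day; spikes point to the time
-- window worth mining with LogMiner
SELECT TO_CHAR(TRUNC(first_time, 'HH24'), 'YYYY-MM-DD HH24:MI') AS hour,
       COUNT(*) AS switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY 1;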
johnsone

Why would you be regularly rebuilding indexes? That is normally not necessary in Oracle. The only reason would be having deleted a significant percentage of the table.
Ora_Techie

"Why would you be regularly rebuilding indexes?"

Yes, this shouldn't be a regular practice; that is why I asked the author whether they are doing it on a regular basis, because index rebuilds also generate redo.
sword12

ASKER
The root cause turned out to be the SAP NetWeaver Gateway.