Solved

Spooling stopping around 25G from scheduler; running process manually spools entire contents

Posted on 2008-10-09
Medium Priority
620 Views
Last Modified: 2013-12-18
When using a scheduler in production, the shell script it calls spools query output to a file.  The file stops mid-line at around 25GB; it does not always stop on the same record or at the same point within a record, so it is not a data issue.  When the shell script is called manually, the entire contents spool successfully (around 200GB). Using the same scheduler job in QA, the entire contents spool successfully (around 150GB).

In production, the scheduler calling the same shell script with different parameters, extracting a smaller data set, spools successfully.

The output from the sqlplus session is logged to a log file.  All of the records are logged (the full 200GB set). At the end of the log the following error appears:
         SP2-0308: cannot close spool file
Here is the version of SQL*Plus (it is the same in QA and Production):
         SQL*Plus: Release 10.2.0.3.0
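
For reference, the spool step of the script looks roughly like this; the connect string, settings, file names, and query below are placeholders rather than the actual production values:

#!/bin/sh
# Rough shape of the spool step; all names and paths here are placeholders.
sqlplus -s $DB_USER/$DB_PASS@PRODDB <<'EOF' >> extract.log
set heading off feedback off pagesize 0 linesize 32767 trimspool on
spool output.dat
select * from some_large_table;
spool off
exit
EOF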

The job runs as the same user whether it is launched by the scheduler or run manually.

Any ideas are appreciated. Thanks.


Question by:HangingCurve
2 Comments
 
LVL 35

Expert Comment

by:johnsone
ID: 22679610
From the Oracle manual, here is the description of that error:

SP2-0308 Cannot close spool file

Cause: The file is currently being used.

Action: Release the file from the other process.


Is it possible the scheduled job kicks off twice, or kicks off a second time before the first one is finished?
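
One quick way to rule that out is a lock-file guard at the top of the shell script.  Just a sketch; the lock file path and log name are whatever fits your environment:

#!/bin/sh
# Sketch of a lock-file guard to spot overlapping runs (paths are illustrative).
LOCKFILE=/tmp/spool_extract.lock
if [ -e "$LOCKFILE" ]; then
    echo "$(date): previous run still active, exiting" >> extract.log
    exit 1
fi
trap 'rm -f "$LOCKFILE"' EXIT
touch "$LOCKFILE"

# ... existing sqlplus spool logic here ...

If the scheduler is starting a second copy while the first is still writing the spool file, the second run will exit and log it instead of fighting over the file.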
 

Accepted Solution

by:
HangingCurve earned 0 total points
ID: 22686558
Thanks Sage, I was thinking the same thing...but upon further review...
Well, of course after submitting this, we had an epiphany at the end of the day yesterday.
We determined that the scheduler runs the script from a different directory. We manually ran the script from that directory, and it turns out that from there it spools to a file system different from the one where the shell scripts live.  That file system did not have enough free space, so the spool hit the limit and stopped.
The sys admin has increased the size of that file system, and the spool now extracts all of the records successfully when the script is run manually from the directory the scheduler runs it from.
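For anyone who hits the same thing: a relative SPOOL path resolves against whatever directory the script is started from, so the quickest check is to look at the spool target and free space from the scheduler's working directory.  A rough sketch; the paths are placeholders:

# Run from the directory the scheduler launches the script in (placeholder path):
cd /opt/scheduler/jobs
df -h .               # free space on the filesystem the relative spool path lands on
ls -lh output.dat     # the partially written spool file, if one is there

Using an absolute path in the SPOOL command, or cd-ing to a known directory at the top of the script, keeps the scheduler run and a manual run writing to the same filesystem.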
