Solved

Spooling stopping around 25G from scheduler; running process manually spools entire contents

Posted on 2008-10-09
614 Views
Last Modified: 2013-12-18
When using a scheduler in production, the shell script it calls spools query output to a file.  The file stops mid-line at around 25GB; it does not always stop on the same record, or at the same point within a record, so it is not a data issue.  Calling the shell script manually, the entire contents spool successfully (around 200GB). Using the same scheduler job in QA, the entire contents also spool successfully (around 150GB).

In production, the scheduler calling the same shell script, but with different parameters that extract a smaller dataset, spools successfully.
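
For context, the script is essentially a sqlplus heredoc that spools the query output to a flat file and appends its terminal output to a log. The sketch below is illustrative only (connection details, paths, and the query are placeholders, not the real script):

         #!/bin/sh
         # Illustrative sketch -- connection string, paths and query are placeholders
         sqlplus -s /nolog <<EOF >> /logs/extract.log
         CONNECT app_user/app_pass@PROD
         SET FEEDBACK OFF PAGESIZE 0 LINESIZE 2000 TRIMSPOOL ON
         SPOOL extract.dat
         SELECT ... FROM ... ;
         SPOOL OFF
         EXIT
         EOF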

The output from the sqlplus session is logged in a log file.  All of the records are logged (the full 200GB set). At the end of the log the following error appears:
         SP2-0308: cannot close spool file
Here is the version of SQL*Plus (it is the same in QA and Production):
         SQL*Plus: Release 10.2.0.3.0

The job runs as the same user whether it is launched by the scheduler or run manually.

Any ideas are appreciated. Thanks.


Question by:HangingCurve
2 Comments
 
LVL 34

Expert Comment

by:johnsone
ID: 22679610
From the Oracle manual, here is the description of that error:

SP2-0308 Cannot close spool file

Cause: The file is currently being used.

Action: Release the file from the other process.


Is it possible the scheduled job kicks off twice, or kicks off a second time before the first one is finished?
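
If it happens again, a quick way to check that would be to see whether anything else still has the spool file open, and whether the job is running more than once. Something along these lines (assuming fuser/lsof are available on your platform, and substituting the real spool file name for the placeholder):

         # Does any process currently hold the spool file open (and as which user)?
         /usr/sbin/fuser -u /path/to/extract.dat
         # or, if lsof is installed:
         lsof /path/to/extract.dat

         # Is the extract job running more than once?
         ps -ef | grep extract | grep -v grep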
 

Accepted Solution

by: HangingCurve (earned 0 total points)
ID: 22686558
Thanks Sage, I was thinking the same thing...but upon further review...
Well, of course after submitting this, we had an epiphany at the end of the day yesterday.
We determined that the scheduler runs the script from another directory, so we manually ran the script from that directory.  It turns out that from there the script spools to a filesystem different from the one where the shell scripts live.  That filesystem did not have enough free space, so the spool hit the limit and stopped.
The sysadmin has increased the size of that filesystem, and the spool now extracts all of the records successfully when the script is run manually from the directory the scheduler uses.
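
For anyone who hits the same thing: the quickest check is to confirm which filesystem a relative spool path actually lands on when the job runs from the scheduler's working directory, and how much space is left there. For example (directories here are illustrative):

         cd /dir/the/scheduler/runs/from     # hypothetical scheduler working directory
         df -k .                             # filesystem and free space for a relative SPOOL target
         df -k /dir/where/scripts/live       # compare with where a manual run spools to

Making the SPOOL target an absolute path in the script would also remove the dependency on the working directory altogether.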
