Solved

ORA-01144: File size exceeds maximum of 4194303 blocks

Posted on 2010-11-17
11,447 Views
Last Modified: 2012-05-10
We have an Oracle DB where our main MRP system runs (IFS).
Currently on Oracle 9i – 9.2.0.8.0.

Our main tablespace, IFSAPP_DATA, was at 30 GB when we noticed that things were slowing down. Looking at the tablespace in the Oracle Enterprise Management console, we saw that it was at 100%.

So at first we attempted to increase the size from 30 GB to 40 GB and got the error "ORA-01144: File size (5376000 blocks) exceeds maximum of 4194303 blocks".
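For context, the statement behind a resize like this would be something along the following lines (the exact command and datafile path aren't shown in the post, so both are assumptions here):

-- Hypothetical sketch only; the datafile path is assumed, not taken from the post.
ALTER DATABASE DATAFILE 'G:\oradata\afsl\ifsapp_data01.dbf' RESIZE 40960M;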

We have now changed it to 32 GB and it is chugging along nicely, but within 7-10 days or so we'll be at the same stage again (I think).

Attached is a PDF file with three screen grabs showing (a) the General tab, (b) the Storage tab, and (c) the error message from the console.

Hope someone can help. Document1.pdf
Question by:thegiantsmurf
5 Comments
 
LVL 6

Expert Comment

by:anumoses
ID: 34155599
 
LVL 8

Accepted Solution

by:
Rindbaek earned 500 total points
ID: 34155678
With your current settings, 32 GB is probably the maximum size for the datafile.

A quick workaround would be to add an extra datafile to the tablespace.
Something like this (ensure that you have enough disk space for the max sizes):
ALTER TABLESPACE IFSAPP_DATA ADD DATAFILE 'G:\oradata\afsl\ifsapp_data02.dbf' SIZE 128K AUTOEXTEND ON NEXT 128K MAXSIZE 10G;
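To confirm the new file was added and keep an eye on how far each file can still grow, a query along these lines against the standard DBA_DATA_FILES view should work (the column selection is just a suggestion):

SELECT file_name,
       ROUND(bytes/1024/1024)    AS size_mb,
       autoextensible,
       ROUND(maxbytes/1024/1024) AS max_mb
  FROM dba_data_files
 WHERE tablespace_name = 'IFSAPP_DATA';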

 

Author Comment

by:thegiantsmurf
ID: 34155768
OK, reading more into this: http://forums.oracle.com/forums/thread.jspa?threadID=1083307
We have reached the limit for a database running at 8192 bytes per block.
Can I just change the block size, or do I need to export and then re-import at a different block size?
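For what it's worth, the current block size can be confirmed from SQL*Plus without exporting anything (a quick sketch; either form should work):

SHOW PARAMETER db_block_size
-- or
SELECT value FROM v$parameter WHERE name = 'db_block_size';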
 
LVL 35

Expert Comment

by:johnsone
ID: 34155870
32G is not the limit of the entire database with an 8K block size.  That is the limit of a single data file.  As Rindbaek suggested, adding a data file to the tablespace should solve your problem.
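Putting numbers on it: the per-datafile limit is 2^22 - 1 = 4,194,303 blocks, so at an 8K block size that is 4,194,303 x 8,192 bytes, roughly 32 GB per file, while the 5,376,000 blocks in the error message work out to about 41 GB, which is why the resize was rejected. To see how close each file in the tablespace is to that limit, something like this against DBA_DATA_FILES should do (a sketch, not from the original thread):

SELECT file_name,
       blocks,
       4194303 - blocks AS blocks_left_before_ora01144
  FROM dba_data_files
 WHERE tablespace_name = 'IFSAPP_DATA';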
 
LVL 48

Expert Comment

by:schwertner
ID: 34158357
Add several small files to that tablespace; keeping one big file is very bad practice. I agree with the other experts - they are entirely correct. Big files also become fragmented across many extents, which makes them slow, and backup and restore of big files are more complex and riskier. There is no benefit to keeping big files.
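If the worry is hitting 100% again in 7-10 days, free space in the tablespace can be tracked with a simple query against DBA_FREE_SPACE (a sketch; how often to run it and what threshold to alert on is up to you):

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name = 'IFSAPP_DATA'
 GROUP BY tablespace_name;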


