How to reduce the size of tablespace files that are clearly not fully used?

I'm working with a test database on my laptop and I had to abort a couple of large inserts and index builds.  A few of my DBF files are very large.  In particular,

TEMP01.DBF - 8+ Gigs
UNDOTBS01.DBF - 5.5 Gigs
USERS01.DBF - 3.7 Gigs

I don't know exactly which tables and indexes are in which tablespaces, but I know I'm not using that much space.  Is there any way to reduce the sizes of those files?

Thanks buckets
Naveen Kumar (Production Manager / Application Support Manager) commented:
I don't think you can shrink these tablespace files directly. Add a new tablespace with a data file sized to what you actually need (say 2 GB is enough for you), move your objects into it, then drop the old tablespaces from the Oracle database and remove the files that belonged to them at the OS level.

Before dropping, verify all the objects present in the tablespaces you are going to drop, because if you drop something by mistake you cannot get it back.

select tablespace_name, segment_name, bytes/1024/1024 size_in_mb
from dba_segments
where tablespace_name in ('USERS', 'TEMP');  -- tablespace names (e.g. USERS, TEMP), not file names

select * from dba_data_files
where tablespace_name in ('USERS', 'TEMP');  -- data files backing permanent tablespaces

select * from dba_temp_files
where tablespace_name = 'TEMP'; -- temp files belong to temporary tablespaces and are listed here, not in dba_data_files

If you don't want to drop existing tablespaces and create new ones, then identify unwanted tables/indexes sitting in the tablespaces you want to clean up and drop them if they are not needed. This will get you free space without adding any new space.
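A hedged sketch of that move-and-drop cleanup, assuming a replacement tablespace named USERS2 and illustrative object and file names (adjust paths and names to your system):

```sql
-- Create a right-sized replacement tablespace (path and size are assumptions).
CREATE TABLESPACE users2
  DATAFILE 'C:\oracle\oradata\users02.dbf' SIZE 2G;

-- Move each segment you want to keep (repeat per table and index;
-- my_table/my_index are placeholders).
ALTER TABLE my_table MOVE TABLESPACE users2;
ALTER INDEX my_index REBUILD TABLESPACE users2;

-- Once the old tablespace is empty, drop it. INCLUDING CONTENTS AND DATAFILES
-- also deletes the OS files, so there is nothing left to remove by hand.
DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;
```

Note that moving a table marks its indexes UNUSABLE, so each index needs the REBUILD shown above before queries can use it again.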

gvsbnarayana commented:
You can find out the exact space used by each table/index by querying the dba_extents view. Sometimes, because of the way the index/table is fragmented across the datafile, you may not be able to resize the datafile down to the size of the segment.
The following query shows the smallest possible size for each datafile, and you can resize each one down to that size:
column value new_value blksize
select value from v$parameter where name = 'db_block_size';

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024 ) currsize,
       ceil( blocks*&&blksize/1024/1024 ) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
  from dba_data_files a,
       ( select file_id, max(block_id+blocks-1) hwm
           from dba_extents
          group by file_id ) b
 where a.file_id = b.file_id(+);
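Building on the query above, a sketch that generates the matching resize commands instead of just reporting sizes (it reuses the same &&blksize substitution variable; review the emitted statements before running any of them):

```sql
-- Emit one ALTER DATABASE ... RESIZE per datafile, sized down to its
-- high-water mark, but only for files with something to reclaim.
select 'alter database datafile ''' || file_name || ''' resize ' ||
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) || 'm;' cmd
  from dba_data_files a,
       ( select file_id, max(block_id+blocks-1) hwm
           from dba_extents
          group by file_id ) b
 where a.file_id = b.file_id(+)
   and ceil( blocks*&&blksize/1024/1024 ) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0;
```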

DalTXColtsFan (Author) commented:
Thanks gvs this helped.

3 questions:

1.  My undo tablespace file (UNDOTBS01.DBF) is maxed out at about 5.4 Gigs (roughly 700,000 blocks at 8K each), and according to your query it can't be reduced.   Does that mean I'm completely stuck?  Is there no setting I can adjust to make that file smaller?

2.  My TEMP01.DBF file did not even appear in that list, and it's about 8 Gigs all by itself.

3.  For the ones that your query reported *can* be reduced, how do I do it?  Is it a SQL statement?  A few of them had trimmable space.


DalTXColtsFan (Author) commented:
Don't know if this helps, but I ran the query

select tablespace_name, segment_name, bytes/1024/1024 size_in_mb
from dba_segments
order by tablespace_name;

And saw that there was one segment in the tablespace UNDOTBS1 that is over 5 Gigs all by itself:  The segment name is _SYSSMU3$.  The SEGMENT_TYPE is TYPE2 UNDO.

No other segment associated with that tablespace is even ten percent as big.

This mean anything?
harry_hendrata commented:
You may want to refer to this link on how to reduce a datafile; however, if the datafile is fragmented, you may need to export the table, drop it, resize the datafile, and then import the data back.

DalTXColtsFan (Author) commented:
Just to update on the progress I’ve made, I found out that at least for the file TEMP01.DBF I can do this:

ALTER TABLESPACE temp ADD TEMPFILE '/oradata/temp03.dbf' SIZE 100M;
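For completeness, ADD TEMPFILE by itself only adds space; the oversized tempfile has to be dropped as well for the disk to be reclaimed. A hedged sketch of the full sequence, with paths assumed from this thread (adjust to your system):

```sql
-- Drop the bloated tempfile and delete its OS file (10g syntax).
ALTER DATABASE TEMPFILE 'C:\oracle\product\10.2.0\oradata\ikb20063\TEMP01.DBF'
  DROP INCLUDING DATAFILES;

-- Add a small replacement tempfile that can grow again if a big sort needs it.
ALTER TABLESPACE temp
  ADD TEMPFILE 'C:\oracle\product\10.2.0\oradata\ikb20063\TEMP02.DBF'
  SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
```

The drop fails with ORA-25152 if the tempfile is currently in use, so it is easiest right after an instance restart or when no large sorts are running.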

It seems, and someone please correct me if I’m wrong, that nothing is actually “stored” in the TEMP tablespace.  It’s basically an extra paging file used by Oracle for HASH joins, sorts, and whatever it needs extra memory for.  Also correct me if I’m wrong but it seems like a higher SORT_AREA_SIZE initialization parameter value will decrease the use of this temp tablespace and increase query speed (hence the reason for having a ton of RAM on the database server).

So this one has been resized, but will probably grow right back to the size it was next time I try to join a 24,000,000 row table to a 22,000 row table.  As a matter of fact, I probably hosed myself because the next time I run a query that big it will probably be slower because Oracle has to increase the size of this file before it can even RUN the bloody thing!

The tablespace USERS had a table with 46,000,000 rows in it that I have truncated, so now there’s about 3 Gigs of space in it that I’m not using.  The code behind this link (thanks, Harry) had the answer.

I ran the statement ALTER DATABASE DATAFILE 'C:\oracle\product\10.2.0\oradata\ikb20063\USERS01.DBF' RESIZE 60M; and it WORKED.  Again, I probably hosed myself because when I go to re-insert all of that data it’ll be slower, but hey, I’ll take a hosing in the name of a learning experience!

The tablespace UNDOTBS1 has 15 segments, and one of these segments is over 5 Gigs all by itself.  None of the others are anywhere NEAR that big.  I *would* like to know how to reduce the size of this one if possible, because I’ve learned (the HARD way!) to commit every 10,000 or so rows when the business logic permits: if you try to add or update 46,000,000 rows in one fell swoop, the whole kit and caboodle gets written to the undo tablespace, which drastically increases execution time.  I’m sure that the one 5-Gig segment is from when I tried to create 46,000,000 rows without committing until the end.  I aborted the process before it finished, too.

I'm going to go ahead and close this question, but if anyone happens to know how to delete the ONE segment in the UNDO tablespace, I'd appreciate the info!
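For anyone landing here later: you don't drop the _SYSSMU* segment itself (Oracle manages undo segments internally); the usual way out is to switch the instance to a fresh undo tablespace and drop the old one. A hedged sketch, with the new name, path, and sizes as assumptions:

```sql
-- Create a small replacement undo tablespace (path and sizes are assumptions).
CREATE UNDO TABLESPACE undotbs2
  DATAFILE 'C:\oracle\oradata\undotbs02.dbf'
  SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE 2G;

-- Point the instance at the new undo tablespace.
ALTER SYSTEM SET undo_tablespace = undotbs2;

-- After active transactions finish and the undo_retention window passes,
-- drop the old tablespace along with its oversized file.
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```

The drop will refuse to run while any undo segment in the old tablespace still serves an active transaction, so the wait between the second and third statements matters.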