  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 262
Database re-indexing - best practices advice sought

I have a 15GB database which has a daily maintenance plan to optimise the tables by setting their fill factors back to 90%.

The problem is that it generates a huge amount of transaction log activity. The next transaction log backup after the reindex comes to 11GB!

What is the best way to run reindexing? Should I write a script that switches the database into the simple recovery model and then runs the reindex? And if I did that, how would it affect the continuity of any restore I might need to perform, given that I would be switching the database back and forth between the simple and bulk-logged recovery models?
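For reference, the kind of switch I'm thinking of would look something like this (MyDatabase and the backup path are just placeholders):

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED   -- or SIMPLE
-- ... run the reindex job here ...
ALTER DATABASE MyDatabase SET RECOVERY FULL          -- back to the normal model
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'   -- placeholder path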

First question: What is the best way to make reindexing and transaction log backups co-exist without the reindexing operation blowing out the transaction log size?

Second question: Should I even be running a reindex every day or would once a week suffice? I'm thinking daily is best because the database is a major production system which is hit by many users every day.
Asked by: meumax
1 Solution
 
VeluN commented:
Rebuilding an index online is not an issue from Oracle 9i onwards; use the ONLINE option with ALTER INDEX ... REBUILD.

SQL> ALTER INDEX IDX_DEPT_NO REBUILD NOLOGGING PARALLEL 16 ONLINE;

This allows you to rebuild the index without disturbing users of IDX_DEPT_NO. Adding the NOLOGGING option avoids generating redo, and the PARALLEL option enables parallel execution.

Rebuilding indexes daily is a painful job. Instead, you can validate the index first:

SQL> ANALYZE INDEX IDX_DEPT_NO VALIDATE STRUCTURE;

Now you can check the index statistics in the INDEX_STATS dictionary view and decide whether a rebuild is needed.
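For example, a minimal check (using the same IDX_DEPT_NO index; INDEX_STATS only holds the row for the most recently validated index):

SQL> SELECT name, height, lf_rows, del_lf_rows FROM index_stats;

A commonly quoted rule of thumb (debatable, but a starting point) is to rebuild only when DEL_LF_ROWS is a large fraction of LF_ROWS (say over 20%) or HEIGHT has grown beyond 3 or 4.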


Rgds,

Velu N
 
meumax (Author) commented:
Oops, I should have mentioned which RDBMS I'm using. It's actually SQL Server 2000.
 
pettmans commented:
I think you may be running your maintenance operation a little too frequently.

The need to do this operation depends on:
- frequency of index updates
- observed application performance

If your application users aren't reporting performance issues then you could consider dropping the frequency down to weekly (or even monthly), then set up some monitoring to see if performance is significantly impacted.
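As a rough sketch of that monitoring (the table name Orders is just a placeholder), DBCC SHOWCONTIG reports fragmentation per index in SQL Server 2000:

-- Check fragmentation for all indexes on one (placeholder) table
DBCC SHOWCONTIG ('Orders') WITH ALL_INDEXES

Logical Scan Fragmentation climbing well above 20-30%, or Scan Density falling well below 100%, is the usual rough sign that an index actually needs attention.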

Consider dropping the fill factor down (say to 75% or 80%). That will increase the space used for indexes and increase the number of pages that may need to be traversed for a query, but it will reduce the level of fragmentation that occurs - so you can get away with less frequent index rebuilds.
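For example, a rebuild with a lower fill factor can be done per table with DBCC DBREINDEX (placeholder table name; the empty string means all indexes on that table):

-- Rebuild all indexes on the Orders table with an 80% fill factor
DBCC DBREINDEX ('Orders', '', 80)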

Consider the use of DBCC INDEXDEFRAG as an alternative to ReIndex. It's an online operation, and is more efficient on lightly fragmented indexes. It will take longer on a heavily fragmented index, but (a minimal example follows this list):
- no locks are held, so user transactions are not blocked.
- the use of short transactions allows log file size to be minimised through frequent log backups during the operation or use of the SIMPLE recovery model.
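A minimal sketch, with placeholder database, table and index names:

-- Defragments the leaf level of one index online; log backups can keep running alongside it
DBCC INDEXDEFRAG ('MyDatabase', 'Orders', 'IX_Orders_CustomerID')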

As both INDEXDEFRAG and REINDEX allow you to specify individual tables to target, you could mix and match use of these utilities based on your knowledge of application usage.

Regards,
Scott Pettman
