Environment is Oracle Enterprise Edition 10.2.0.3, Solaris 10.
Full cold backups of our 3TB data warehouse are taken with RMAN when possible. Sometimes that is only once a month, due to system maintenance by Unix Ops or other business priorities.
I do not have enough space to run in ARCHIVELOG mode; we generate 350G of redo logs (each 1.2M) every week. We have approximately 580 tablespaces and about 630 datafiles (I inherited that!!). Most of our partitioned tables use monthly partitions that are never archived off; only our daily partitioned tables have a limited retention policy of 90 days.
Currently, our RMAN backup takes about 13 hours, and the backupset is stored on a storage array where no other production files exist. The compressed backupset uses about 600M.
Because "the business" sometimes comes out of the woodwork and asks to update or modify huge partitioned tables (we have about 30 of them) containing data going back as far as 1998, I have never been able to get a commitment to mark any of these tablespaces read only. Now, I have been given a list of about 140 tablespaces that can be marked read only.
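For clarity, the plan is simply to issue the following for each approved tablespace (the tablespace name here is made up):

```sql
-- Hypothetical tablespace name; repeat for each of the ~140 approved tablespaces.
ALTER TABLESPACE sales_1998 READ ONLY;
```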
So, I plan to take a new "baseline" RMAN backup this Sunday, after which I will mark the approved tablespaces read only. Future RMAN backups should then be able to skip the read-only tablespaces, as long as I retain the original baseline backup. Are there any circumstances where that would not hold? And if our storage array went up in flames and we lost everything, how would I go about recovering using the baseline plus the most recent backup, which contains everything except the read-only tablespaces?
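Here is my rough plan as RMAN commands. The tags are my own invention, and I am assuming SKIP READONLY and BACKUP OPTIMIZATION behave this way on 10.2; please correct me if the recovery sketch is wrong for a NOARCHIVELOG shop like ours:

```sql
-- One-time baseline, taken cold, before the tablespaces go read only:
RMAN> BACKUP DATABASE TAG 'baseline_with_ro';

-- After the ALTER TABLESPACE ... READ ONLY statements, later cold backups:
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;   -- skip files already backed up and unchanged
RMAN> BACKUP DATABASE SKIP READONLY;      -- explicitly omit read-only datafiles

-- Disaster recovery sketch (NOARCHIVELOG, consistent cold backups):
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;    -- read-only files should come from the baseline,
                           -- everything else from the most recent backup
RMAN> ALTER DATABASE OPEN RESETLOGS;
```

One thing I am unsure of: the baseline would eventually age out of our retention policy, so I assume I would need to exempt it with something like CHANGE BACKUP TAG 'baseline_with_ro' with a KEEP clause.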
Would I be better off taking a separate backup of only the tablespaces that are going to be marked read only, and then another backup after they have been marked?
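That alternative would look something like this, I think (tablespace names hypothetical):

```sql
-- A dedicated, tagged backup of just the soon-to-be read-only tablespaces,
-- which could be kept and tracked independently of the regular full backups:
RMAN> BACKUP TABLESPACE hist_1998, hist_1999 TAG 'ro_only_baseline';
```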
Most of the literature I have explored deals with shops that run in ARCHIVELOG mode, and I have not found a recovery scenario that describes how to recover the entire database in a case like ours.
Thanks in advance.