TSM Scheduled Backup configuration and hints

OK, our old TSM 5.5 server has this schedule configuration:

tsm: dom_TSM>q sched

Domain       *   Schedule Name        Action   Start Date/Time       Duration   Period   Day
----------   -   ------------------   ------   -------------------   --------   ------   ---
domD             dom5_DES_BACKUP      Inc Bk   09/03/13   17:00:00   1 H                 (*)
domD             domDES_BACKUP        Inc Bk   08/29/13   06:30:00   2 H                 (*)
domD             MIDDLEDES_BACKUP     Inc Bk   09/03/13   14:15:00   2 H                 (*)
domD             MIDDLEPRE_BACKUP     Inc Bk   09/15/11   11:15:00   2 H                 (*)
domP             dom5CONT_BACKUP      Inc Bk   09/26/12   15:00:00   1 H                 (*)
domP             dom5_BACKUP          Inc Bk   06/08/11   17:40:24   2 H                 (*)
domP             domNIM_BACKUP        Inc Bk   01/24/11   12:00:00   2 H                 (*)
domP             domNTP1_BACKUP       Inc Bk   09/03/13   06:00:00   1 H                 (*)
domP             domPROD2_BACKUP      Inc Bk   06/17/11   12:35:00   2 H                 (*)
domP             domPROD_BACKUP       Inc Bk   06/17/11   17:50:00   2 H                 (*)
domP             domREPO_BACKUP       Inc Bk   01/24/11   15:30:00   2 H                 (*)
domP             LPAR2RRD_BACKUP      Inc Bk   09/02/13   11:10:00   2 H                 (*)
domP             MIDDLECONT_BACKUP    Inc Bk   02/02/11   16:00:00   2 H                 (*)
domP             MIDDLEP1_BACKUP      Inc Bk   01/24/11   16:45:00   2 H                 (*)
domP             MIDDLEP2_BACKUP      Inc Bk   01/24/11   15:00:00   2 H                 (*)
domP             OTRS_BACKUP          Inc Bk   02/21/13   14:30:00   1 H                 (*)
domP             VIOS43_1_BACKUP      Inc Bk   09/23/11   18:45:00   2 H                 (*)
domP             VIOS43_2_BACKUP      Inc Bk   09/23/11   18:55:00   2 H                 (*)
domP             VIOS43_3_BACKUP      Inc Bk   09/23/11   19:05:00   2 H                 (*)
domP             VIOSP7_1_BACKUP      Inc Bk   09/23/11   19:15:00   2 H                 (*)
domP             VIOSP7_2_BACKUP      Inc Bk   09/23/11   19:25:00   2 H                 (*)
domP             VIOSP7_3_BACKUP      Inc Bk   09/23/11   19:30:00   2 H                 (*)
domP             VIOSP7_4_BACKUP      Inc Bk   09/23/11   19:35:00   2 H                 (*)
domP             VIOSP7_5_BACKUP      Inc Bk   09/23/11   19:40:00   2 H                 (*)
domP             VIOSP7_6_BACKUP      Inc Bk   05/02/12   16:25:00   2 H                 (*)
das_DOM          dasPROD2_BACKUP      Inc Bk   07/24/12   15:47:00   1 H                 (*)
das_DOM          dasTEST2_BACKUP      Inc Bk   07/24/12   14:55:00   1 H                 (*)
das_DOM          dasWEB2_BACKUP       Inc Bk   07/24/12   15:40:00   1 H                 (*)


To check whether this is well configured, I would like to ask:

1- As you can see, there is a schedule entry for every AIX node. Is that recommended, or is it better to have a single schedule for all nodes?

2- If it's better to have one schedule per node, what would be the recommended start/end times for these schedules?

I've searched the web for some kind of best practice for TSM schedules, but without success.
woolmilkporc commented:

One schedule per node seems a bit overdone.

It's best to use fewer schedules, each one responsible for a group of nodes.
Choose the number of nodes in a group in relation to the number of tape drives you have (if backups go directly to tape) or to the performance/throughput of your disk pools.

Also take care not to group nodes together which are in some way cooperating (for example, do not back up all your SAP application servers plus the SAP database server at the same time - unless they are all idle during the backup window you choose, of course).
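
For illustration, a grouped schedule along those lines could be set up from the administrative command line roughly as follows. All domain, schedule, and node names here are made-up examples, not part of the configuration shown above:

```
tsm: dom_TSM>define schedule domP PRODUCTION_BACKUP action=incremental starttime=00:30 duration=2 durunits=hours period=1 perunits=days
tsm: dom_TSM>define association domP PRODUCTION_BACKUP node01,node02,node03
```

One `define association` can list several nodes, so a whole group can be attached to the schedule in a single command.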

As for the start times - first decide which are the least busy hours of your systems, then take care to distribute the start times across this window to avoid huge wait queues, and again, take the number of drives and the disk pool performance into account.
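
As a toy illustration of spreading start times evenly across such a window (plain Python, nothing TSM-specific; the window and the number of schedule groups are invented):

```python
from datetime import datetime, timedelta

def stagger_starts(window_start, window_hours, n_schedules):
    """Spread schedule start times evenly across the backup window."""
    step = timedelta(hours=window_hours) / n_schedules
    return [window_start + i * step for i in range(n_schedules)]

# A 00:30-06:00 window (5.5 h) shared by 4 schedule groups:
for t in stagger_starts(datetime(2013, 9, 3, 0, 30), 5.5, 4):
    print(t.strftime("%H:%M"))   # 00:30, 01:52, 03:15, 04:37
```

In practice you would bias the spacing toward your biggest clients, but even spacing is a reasonable starting point.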

We at our company run all backups nightly off-shift (00:30 to 06:00) to keep the impact on production processes as low as possible and run the storage pool backups during the day, because tape drives are then free and client operations are not involved with this.

The schedule duration has two functions: first, the schedule will fail if the server could not start the backup within that time; second, the randomization the server does to scatter the start times is based on a percentage of the duration (see "Set Randomize" under dsmadmc).
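
The arithmetic behind that scattering can be sketched in a few lines (this mimics the documented behavior, it is not TSM code; the 25% default randomization is assumed):

```python
import random
from datetime import datetime, timedelta

def randomized_start(sched_start, duration_hours, randomize_pct=25):
    """Pick a client start time uniformly within the randomization window,
    i.e. randomize_pct percent of the schedule duration."""
    window = timedelta(hours=duration_hours) * randomize_pct / 100
    offset = timedelta(seconds=random.uniform(0, window.total_seconds()))
    return sched_start + offset

sched = datetime(2013, 9, 3, 0, 30)
start = randomized_start(sched, duration_hours=2)  # 25% of 2 h = 30 min window
print(start)  # somewhere between 00:30:00 and 01:00:00
```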

sminfo (Author) commented:
But suppose you have 10 nodes on one schedule that starts at 00:30: how does TSM decide which node is executed first? How does the TSM server handle the schedule for these 10 nodes?

For example, this is the schedule for those 10 nodes; I have added all 10 nodes to this schedule, PRODUCTION_BACKUP:

domP                  PRODUCTION_BACKUP       Inc Bk     01/24/11   12:00:00          2 H 


Should I increase the duration from 2 H to 6 H?
What happens if I add 10 more nodes to this schedule? How do I know that from 00:30 to 06:00 there is enough time to run backups for 20 nodes?

Sorry if it's a stupid question, but I don't quite understand how this works.
woolmilkporc commented:
Basically, the starting time of a client is scattered across a time window derived from the schedule duration (default 1 hour) and the randomization percentage (default 25).

So, assuming a schedule duration of 2 hours and the default randomization, the backups of your 10 clients are started at some random point in time during the first 30 minutes after the schedule's start time.

>> What happens if I add 10 nodes more to this schedule? <<

Then not ten but twenty nodes are started at some point during those same 30 minutes.

It might very well be that a client is still busy when the next client comes into play, and that's why I suggested taking limiting factors like the number of drives or the storage pool throughput into account when choosing the size of the group.

>> How do I know that from 00:30 to 06:00 there is enough time to make backups for 20 nodes? <<

How should I know? If each node has to back up terabytes of data to a total of one or two drives, then there obviously won't be enough time.
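
A back-of-envelope estimate makes this concrete (every figure below is invented for illustration; plug in your own data volumes and drive speeds):

```python
def backup_hours(total_gb, drives, mb_per_sec_per_drive):
    """Hours needed to stream total_gb through the given number of drives."""
    throughput_mb_s = drives * mb_per_sec_per_drive
    return (total_gb * 1024) / throughput_mb_s / 3600

# 20 nodes x 50 GB nightly incremental = 1000 GB over 2 drives at 80 MB/s:
print(f"{backup_hours(1000, 2, 80):.1f} h")   # 1.8 h -> fits a 5.5 h window
```

With terabytes per node instead of gigabytes, the same formula quickly exceeds any overnight window, which is the point above.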

And why do you say "from 00:30 to 06:00"? I hope this has nothing to do with the schedule you showed above!
I told you that we run several schedules between 00:30 and 06:00, not just one, and the design is such that the last schedule contains only small nodes, so that everything is done by about 06:30.

But what do you actually mean by "time enough"? The "schedule duration" parameter does not mean that a client process gets killed if it hasn't finished within this time; it just means that TSM must have been able to successfully contact the client and start its work. TSM doesn't care how long the actual backup takes.
sminfo (Author) commented:
OK, OK... I understand now.

No, the 00:30 to 06:00 was just an example... ;)

Thanks WMP!!