westhelpdesk asked:
Replication Group Size: Issue or Not?

I was wondering if a replication group's folders can be too big for fast replication. Here is what I have:

A namespace and three target servers, with three replication groups. Two of the replication groups have one folder each, with files underneath, a few GB worth; if a server goes down, users are redirected to another server without issues. The third replication group has 10 folders and most likely 300-400 GB of data; I have all of that data, about 10 shared folders, in the one replication group. Question:

Is this replication group, with all that data, too much? Or should I make, say, five replication groups with two shared folders in each? Would this make replication faster and more stable? I'm new to this, and I notice that when the main server for this replication group goes down, users' data is not getting replicated fast enough. Could the replication group itself be too big and need dividing into smaller replication groups? Hope this makes sense; thanks for any help.
Steve replied:

There isn't really a size limit as such, but the spec of your servers and your bandwidth are likely to pose some bottlenecks.

A large number or size of files may cause issues for the initial seeding/replication, but doesn't really matter in the long term.
The number of changes per day is probably more relevant than the actual size, because it's the changes that need to be replicated each day.

Assess the number of files, and their size, that are likely to change each day, and compare this to the bandwidth DFS can use to replicate (taking into account the schedules in place).

The server does have to process the USN journal, and a large number of folders/files can mean more processing, but you can easily monitor the server (Performance Monitor includes dedicated DFS Replication counter sets) to check that CPU, memory, and disk access are coping with the replication.
westhelpdesk (ASKER) replied:

As far as resources on the systems go, they're fine there; they're not reaching half load. Now, I did find out that the staging area was too small. I am using the recommendation that if 30 GB is being replicated, then I set the staging folder to 60 GB. Correct?

1) Two targets are right next to each other on a gigabit switch, so I don't see this as an issue.
2) The other target is on a 20 Mb pipe, so I'm not sure that would be an issue either. Your thoughts on that?

As to your remark, "assess the number of files & their size that are likely to change each day and compare this to the bandwidth DFS can use to replicate (taking into account the schedules in place)": how can I do this?

I am using the DFSRMon GUI to monitor files and replication. I'm not too savvy with the command line, so I prefer GUIs. Do you know of another GUI tool I can use for this?

Thanks for all your help!!

Steve replied:

The 20 Mb pipe should be pretty good, but without knowing how much data is going to flow, we can't really make any judgement on it.
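For rough numbers, ignoring protocol overhead and any savings from RDC compression: 20 Mb/s is about 2.5 MB/s, which works out to roughly 9 GB per hour, or around 200 GB per day if the link were saturated around the clock. So the pipe itself only becomes the bottleneck if your daily change rate approaches figures like that.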

You asked: "assess the number of files & their size that are likely to change each day and compare this to the bandwidth DFS can use to replicate (taking into account the schedules in place)... How can I do this?"
There are loads of ways. Knowing the kind of data the company uses, you can probably make a good guess.
You can use clever tools and monitors, but it's far easier to do a Windows search for files modified or created in the last 24 hours, or something like that; see the sketch below.
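If you'd rather script it than search, here is a minimal sketch (Python; the folder path and link speed are hypothetical examples, so substitute your own) that totals the files modified in the last 24 hours and estimates how long they would take to push over a given link:

import os
import time

ROOT = r"D:\Shares\Data"   # hypothetical replicated folder - use your own
LINK_MBPS = 20             # link speed in megabits per second

cutoff = time.time() - 24 * 3600   # anything modified in the last 24 hours
count, total_bytes = 0, 0

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:            # skip files deleted/locked mid-scan
            continue
        if st.st_mtime >= cutoff:
            count += 1
            total_bytes += st.st_size

hours = (total_bytes * 8) / (LINK_MBPS * 1_000_000) / 3600
print(f"{count} files changed, {total_bytes / 1e9:.2f} GB in the last 24h")
print(f"~{hours:.1f} h to transfer at {LINK_MBPS} Mb/s (ignoring overhead and RDC)")

Run that against each replicated folder for a few typical days and you have the daily churn figure to compare against the schedule and bandwidth throttle you've set in DFS.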

With regard to the staging folder: 60 GB seems far too large to me.
The staging area is a 'temp' folder which stores files in their compressed form prior to replication.
Two general rules of thumb (sketched in code below):
Ensure the staging area is more than double the largest individual file you would expect to be replicated.
Set the staging area to at least double the size of your expected daily replication.
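To put those rules together, here is a minimal sketch (Python; the folder path and daily-churn figure are hypothetical examples). For comparison it also checks Microsoft's own minimum guidance for Server 2008 and later, which is to make the staging quota at least the combined size of the 32 largest files in the replicated folder:

import heapq
import os

ROOT = r"D:\Shares\Data"   # hypothetical replicated folder - use your own
DAILY_CHURN_GB = 30        # measured daily change, e.g. from the scan above

sizes = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            sizes.append(os.path.getsize(os.path.join(dirpath, name)))
        except OSError:
            continue

largest = max(sizes, default=0)
top32 = sum(heapq.nlargest(32, sizes))

quota_gb = max(2 * largest / 1e9,    # rule 1: double the largest file
               2 * DAILY_CHURN_GB,   # rule 2: double the daily replication
               top32 / 1e9)          # MS minimum: sum of the 32 largest files
print(f"Suggested staging quota: ~{quota_gb:.0f} GB")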

westhelpdesk (ASKER) replied:

I am using the DFSRMon tool, and it shows the current staging size in MB of data being replicated. Now, I have some folders where 30, 60, or 90 GB of data is being replicated at certain times of the day. So if I have 30 GB, then I would need to set the staging quota to 60 GB, correct?

When I set the staging quota on a folder that is replicating 30 GB to, say, 45 GB, I always got "high threshold exceeded" (something like that). Once I set it to 60 GB, those went away.

You mentioned clever tools and monitors; do you know of any?

Thanks again for all your help!

Steve replied:

Wow, that is a lot of replicated data. In your case a 60 GB staging folder does sound about right. (Those "high threshold" warnings are DFSR's staging high-watermark events, event 4202, logged when staging use passes 90% of the quota by default, so raising the quota is exactly what made them stop.)

The best tools are the ones built into DFS. It looks like you've already found DFSRMon.

DFSRdiag is a good one too.
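For example, DFSRdiag can show the replication backlog between two members from the command line (the group, folder, and server names below are placeholders):

dfsrdiag backlog /rgname:"<replication group>" /rfname:"<replicated folder>" /sendingmember:<server1> /receivingmember:<server2>

That lists the files still waiting to replicate, which is a quick way to tell whether replication is keeping up with the daily rate of change.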

westhelpdesk (ASKER) replied:

I ran a report and it said 37 files were not able to replicate. When I look for the files, it shows files that are not there. Any reason why some files won't replicate? I know if users have them open they won't. Why would it reference files that are not even there? Makes no sense; any help is appreciated.

Steve replied:

Check the Conflict and Deleted folder, as any files that cause problems are moved into there.

Corrupted files, open files, and files changed in more than one location at once can all cause replication issues. (Open files show up in the DFS Replication event log as sharing violations.)

westhelpdesk (ASKER) replied:

Do you happen to know where this is stored? Thanks again.

Steve replied:

By default it's hidden in a folder called 'DfsrPrivate' located in the root of the replicated folder, but this can be moved, so it isn't always there.

DFS > Replication > 'folder' > Memberships > Properties > Advanced will show exactly where it is.
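Files moved in there get renamed, so if you later need to work out which file was which, DFSR also keeps a ConflictAndDeletedManifest.xml in DfsrPrivate that records the original path of each entry. A rough sketch of reading it (Python; the manifest path is a placeholder, and the Path/NewName element names are an assumption based on manifests I've seen rather than a documented schema):

import xml.etree.ElementTree as ET

# Placeholder path - point this at your own replicated folder's DfsrPrivate.
MANIFEST = r"D:\Shares\Data\DfsrPrivate\ConflictAndDeletedManifest.xml"

tree = ET.parse(MANIFEST)
for resource in tree.getroot():
    original = resource.findtext("Path")     # assumed element: original path
    renamed = resource.findtext("NewName")   # assumed element: conflict name
    if original and renamed:
        print(f"{original}  ->  {renamed}")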

westhelpdesk (ASKER) replied:

Okay, I found the folder, and there are a ton of files in the ConflictAndDeleted folder, like 760 MB worth. How do I know what is good and what is not? Can I just wait until there is no activity on the network and then delete all these files?

I would save them to local storage first, to make sure I have a copy of all the files in case someone needs them. Would that give me a good starting point, or how should I go about finding what I need and don't need? Thanks for all your help.

ASKER CERTIFIED SOLUTION from Steve (member-only content not shown).

westhelpdesk (ASKER) replied:

Okay, I was reading, and they mentioned that to delete these files I can just stop the DFSR service, delete the files, then restart the service. Can I delete the files off the main target server and have the deletion replicate to all the other servers, or do I have to do this on each target server? Thanks again for your help.

Okay, so I ended up stopping the DFSR service on each target server, deleting all the files and folders in the ConflictAndDeleted folders, and then restarting the DFSR service. Now replication is flying with no errors. Thanks!