  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 197

Total transfer size for database mirroring over a period of time?

We've got mirroring set up on 5 production hosts (4 are SQL2008R2, 1 is SQL2008), all transferring data asynchronously to an offsite mirroring server (SQL2008R2).  We have a 50Mb/s connection for this and maybe hit 20Mb/s of usage at peak times.  We're looking at moving the mirroring for ~30 databases on one of these production hosts to a different offsite host, but we want to know more accurately how much data we're mirroring over a 24-hour period (or a 12-hour period, or a 3-day period, etc.).

We're using msdb.sys.sp_dbmmonitorresults to see some data and to monitor the mirroring process, but it isn't telling us what we want to know.  :) Yes, knowing how much data is in the queue and what the recovery rate is are both useful, but I'm looking for the total transfer size over a given time window.

Everything I've looked at, even perfmon, seems to want to give averages (bytes/sec sent, bytes/sec received, etc.), which is too imprecise for my needs.

Am I going about this the wrong way?  I assumed some DMV somewhere would tell me how many bytes were mirrored in the last hour (and then I could easily set up something to grab this every hour).  Usually I would just look at my transaction log backups (which we do every half hour) and add those up over 12/24-hour periods as a good estimate, but we're using CommVault for our backups and I don't trust the numbers it's giving me (since it does its own de-dupe and maybe its own compression - I don't know).

Any thoughts/ideas?
1 Solution
 
nemws1 (Author) Commented:
I found that I can use the numbers from the backup reports in the [msdb] database to find uncompressed backup sizes (from the transaction log backups).  Here's what I'm using, and it's giving me the numbers I need:

USE msdb;
GO

-- Daily totals of uncompressed transaction log backup sizes for the
-- last 14 days.  backup_size in dbo.backupset is the uncompressed
-- size, so it approximates the amount of log data being mirrored.
SELECT CONVERT(VARCHAR(10), bs.backup_start_date, 120) AS BackupDate
    , SUM(bs.backup_size) / (1024 * 1024) AS SizeSumMB
    --, bs.database_name
FROM dbo.backupset AS bs
WHERE bs.backup_start_date >= GETDATE() - 14
    AND bs.type = 'L'  -- 'L' = transaction log backups only
GROUP BY CONVERT(VARCHAR(10), bs.backup_start_date, 120)
    --, bs.database_name
ORDER BY CONVERT(VARCHAR(10), bs.backup_start_date, 120)
    --, bs.database_name
;

GO


 
Ryan McCauley Commented:
I'd agree that looking at the size of your transaction logs is going to be the best bet for estimating how much data you're sending for mirroring. You're effectively doing log shipping to the remote locations (not exactly, but it's a good approximation), so if you're generating 100MB of logs/hour (2.4GB/day), and the logs are generated evenly throughout the day, you'd need roughly 250kbit/s for each location you want to send the logs to (10 remote servers = ~2.5Mbit/s). That assumes even log generation - if you've got heavy periods where logs are generated at twice the rate and you don't account for that, you'll end up with mirror lag during those times.

It's not exact, and it assumes no compression is happening along the way, but it's always the ballpark I've used and it's served me well.
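The back-of-the-envelope numbers above can be checked with a quick calculation (a sketch only; the 100MB/hour log rate and 10 remote servers are just the example figures from the comment, not measured values):

```python
# Sketch of the mirroring bandwidth estimate described above.
# Inputs are the example figures from the comment, not measured values.
log_mb_per_hour = 100   # transaction log generated per hour (example)
remote_servers = 10     # number of mirror destinations (example)

gb_per_day = log_mb_per_hour * 24 / 1000            # 2.4 GB/day
kbit_per_sec = log_mb_per_hour * 8 * 1000 / 3600    # ~222 kbit/s per mirror
total_mbit = kbit_per_sec * remote_servers / 1000   # ~2.2 Mbit/s total

print(f"{gb_per_day} GB/day, {kbit_per_sec:.0f} kbit/s per mirror, "
      f"{total_mbit:.1f} Mbit/s for {remote_servers} mirrors")
```

The exact per-mirror figure works out to about 222 kbit/s; the 250 kbit/s quoted above is the same number rounded up for headroom.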
 
nemws1 (Author) Commented:
Exactly the calculations I was doing.  And yes, we have "hot times", but I'm not too concerned, as long as the mirroring server catches up at some point during the day (or early morning or whatever).
 
nemws1 (Author) Commented:
Thanks for the info - much appreciated!
