Solved

Consolidating all SQL Servers into 2012 Standard: caveats?

Posted on 2013-10-29
27
534 Views
Last Modified: 2013-11-01
Hi people,

Due to the limited budget that we have, I'm going to consolidate all of my SQL Server 2005, 2008, and 2008 R2 Enterprise instances into 2012 Standard Edition.

The new server is very powerful, with 256 GB of RAM and 64 hyper-threaded logical CPUs.

What are the caveats, and what are the best suggestions to maximize availability and avoid any conflicts with the T-SQL stored procedures?

Thanks
27 Comments
 
LVL 35

Expert Comment

by:David Todd
ID: 39610430
Hi,

A standard thought is to turn off hyper-threading. The logic goes like this:
If the CPU isn't loaded, it makes no difference whether it's on or off.
If the CPU is loaded, it could slow some processes down when they are scheduled onto a hyper-threaded logical core and in fact have to wait for the physical one to come free.

Regards
  David
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39610431
PS Of course, run the Upgrade Advisor across your current 2005 & 2008 databases.
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39610433
Can I deploy two named instances?
http://technet.microsoft.com/en-us/library/ff642522.aspx since the limit is 64 GB per instance for Standard Edition?
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39610434
Hi,

Technically yes; you should look at somewhere between two and four named instances to use the available RAM.

On the other hand, think about one instance of enterprise edition.

Put both scenarios in front of your local MS licensing rep, and cost them.

HTH
  David
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39610549
Yes, I have contacted them, and the price is way too high; that is why we went ahead with the Standard Edition.

Mr. Todd, is it possible to assign the first 64 GB to the first instance and then the next 60 GB to the second instance, leaving 4 GB for the OS?
0
 
LVL 34

Assisted Solution

by:Brian Crowe
Brian Crowe earned 56 total points
ID: 39610654
Yes, you can set max memory per instance.
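As a sketch of how that looks in T-SQL (the 65536 MB figure is purely illustrative; pick values for your own workloads):

```sql
-- Illustrative only: cap this instance's memory at ~64 GB.
-- sp_configure takes the value in MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;
```

You would run this against each instance in turn, adjusting the value per workload.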
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39610754
Cool, so how do I make sure that the first SQL Server Standard Edition instance utilizes the first 64 GB and the second instance uses the next available 60 GB?

My understanding is that I need to install the second named instance using a different IP address.
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39610842
Hi,

I suggest that you are overthinking things: you don't have to explicitly assign a 60 GB block to SQL instance A and the next block to instance B.

Just use the max memory setting to regulate the amount of RAM that each instance grabs.

So, you can have instances A through D all set with max memory at 60 GB, leaving 16 GB for the OS.

As you begin to migrate to this new box, I suggest that you test a large process and measure the execution time with everything from 30 GB to 64 GB of RAM assigned to the SQL instance. I suggest that the difference between 60 and 64 GB might not be noticeable.

>> different ip address
Yes and no.

Strictly speaking the IP is the same, but each instance listens on a different port, all resolved (most times) by the SQL Browser service. That is, unless you have cross-domain/firewall/WAN issues, don't worry about it.

HTH
  David
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39610848
OK, but in this case, will separating the databases into named instances based on workload type give me a performance improvement, or not much at all?
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39610851
Hi,

Given that you have 256 GB of RAM, I suggest that being able to use all that RAM across up to four instances of SQL is where you would get the performance benefit: workload A could be cached to the tune of a large slice of 60 GB, and likewise workloads B through D, instead of all four workloads being jammed through 64 GB of RAM.

HTH
  David
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39610860
Cool, so I assume that each of the instances is smart enough to grab its fair share of the free memory available.
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39610951
Hi,

To make sure that sufficient memory is grabbed, do set the min memory to a reasonable figure as well.

Until you get it all figured out, I'd start with somewhere between 20 and 30 GB as a minimum, and look at 60 GB as the maximum.

Without a sensible minimum, SQL can be a little slow to grab memory; it only grabs it as it needs it.
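For example, a 24 GB floor with a 60 GB ceiling might look like this (values are in MB, and the exact figures are placeholders to be tuned per instance):

```sql
-- Illustrative floor/ceiling for one instance (values in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 24576;  -- ~24 GB floor
EXEC sp_configure 'max server memory (MB)', 61440;  -- ~60 GB ceiling
RECONFIGURE;
```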

HTH
  David
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39611089
@DTodd,
So in this case, is the following implementation possible:

Workload: OLTP - Log
Name: PRODSQL1\Instance1
IP: 10.1.1.1
Port: 1433
Set Minimum RAM: 62 GB

Workload: OLTP - Data
Name: PRODSQL1\Instance2
IP: 10.1.1.2
Port: 1433
Set Minimum RAM: 62 GB

Workload: Decision Support System
Name: PRODSQL1\Instance3
IP: 10.1.1.3
Port: 1433
Set Minimum RAM: 62 GB

Workload: Infrastructure and Other DB
Name: PRODSQL1\Instance4
IP: 10.1.1.4
Port: 1433
Set Minimum RAM: 62 GB
0

 
LVL 35

Accepted Solution

by:David Todd
David Todd earned 389 total points
ID: 39612486
Hi,

You don't need to set multiple ip addresses.

They listen on different ports.

On my workstation, as of this boot, I have
Instance: SQL2008R2
TCP Dynamic Ports: 63999
Listen All: Yes

Instance: SQL 2012
TCP Dynamic Ports: 49969
Listen All: Yes

(Yes, I have a limited imagination when it comes to instance names)

Only start with explicitly configuring ips and ports if there are firewall considerations, in that you need holes in firewalls and so dynamic ports don't work.

http://support.microsoft.com/kb/823938
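If you ever need to confirm which port a given instance ended up on, one way is to query it from within the instance itself (while connected over TCP):

```sql
-- Shows the address/port of the current connection; requires a TCP
-- connection (local_tcp_port is NULL over shared memory).
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```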

I would be reluctant to set minimum RAM at the limit right out of the box. I'd set it around 20 to 32 GB at this stage, and adjust as needed.

OLTP-Log and OLTP-Data mightn't need much RAM compared to the decision support system, but due to the need for fast throughput they must not have bottlenecks.

So I'd do the following:
Workload: OLTP - Log
Name: PRODSQL1\Instance1
IP: All
Port: Dynamic
Set Minimum RAM: 48GB
Set Maximum RAM: 62GB

Workload: OLTP - Data
Name: PRODSQL1\Instance2
IP: All
Port: Dynamic
Set Minimum RAM: 48GB
Set Maximum RAM: 62GB

Workload: Decision Support System
Name: PRODSQL1\Instance3
IP: All
Port: Dynamic
Set Minimum RAM: 20GB
Set Maximum RAM: 60GB

Workload: Infrastructure and Other DB
Name: PRODSQL1\Instance4
IP: All
Port: Dynamic
Set Minimum RAM: 20GB
Set Maximum RAM: 60GB

Also pay attention to the disk layout.

Regards
  David
0
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 55 total points
ID: 39612649
>> A standard thought is to turn off the hyperthreading. The logic goes like this: If CPU isn't loaded, then it makes no difference if its on or off. <<

Hmm, I've never heard that: who suggests that it's "standard" to turn off hyper-threading, esp. for SQL 2012?

Also, if CPUs are not loaded, it could be a huge advantage for a given task to use hyperthreading.



Also, be careful not to over-allocate RAM, especially just starting out; keep in mind:

1) the OS needs a fair amount of RAM itself to manage the 256GB RAM.

2) any other tasks besides just Windows, such as Profiler, require RAM also.

3) SQL could previously exceed SQL's internal "max memory" setting, because RAM outside the SQL buffer pools was not included.  I don't know if these external RAM restrictions are "fudged" by SQL or not, but it might be worth verifying before assigning all available RAM.

Typically you start lower and then allocate more RAM if it is left available in the server.

There's also a practical, psychological reason for this: if you over-allocate initially and an instance gets great performance, but then you have to cut its RAM back, people might see a slight slowdown, which reflects badly on you.
0
 
LVL 35

Expert Comment

by:David Todd
ID: 39612665
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39613063
@DTodd: Thanks for the reply and explanation. So in this case the conclusions are as follows:

With four instances installed on the same server, each named instance is reached via its own TCP dynamic port.

A new IP address is only needed if each instance needs to use port 1433 specifically, or for access from a different network zone that must be opened on the firewall.

@ScottPletcher: Cool, I'll start from the lower limit, e.g. 50 GB for each of the instances, and then increase it on an as-needed basis. Obviously there will be downtime for this one due to the Standard Edition. Am I correct?
0
 
LVL 35

Assisted Solution

by:David Todd
David Todd earned 389 total points
ID: 39613094
Hi,

Nope. Don't even go there with port 1433; that's the default instance's port (and the SQL Browser itself sits on UDP 1434).

The suggestion is to fix the port that SQL uses: where the configuration shows the dynamic port, copy this value to the static TCP Port field and clear the dynamic one.

Now that you know which port this instance uses, open this port on the firewall.

HTH
  David
0
 
LVL 7

Author Closing Comment

by:Senior IT System Engineer
ID: 39613125
Thanks !
0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39614332
That's an old link given the rather rapid changes in hyper-threading hardware and software.

http://sqlblog.com/blogs/linchi_shea/archive/2012/01/28/no-respect-numa-affinity-meets-query-parallelism.aspx

Still, there is certainly an on-going discussion in SQL circles with regard to this option.

I suggest first allowing hyper-threading in the hardware and using MAXDOP 4 in SQL and see how that works.  [Btw, parallelism seems especially helpful on index rebuilds.]


You can change memory settings without any downtime. Simply adjust the memory settings, and SQL will gradually comply with them; technically, I don't think you even need to run a RECONFIGURE command.
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39615331
Thanks, Scott. I believe the server BIOS enables Hyper-Threading by default.

In total, from Task Manager I can see that there are 64 logical processors in the Performance tab on this server.
0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39615384
I'm almost certain you're right; by default, hyper-threading will be enabled in the BIOS, so you probably won't need to make any changes there.

The critical number for MAXDOP is (physical) cores. You never want to allow MAXDOP to exceed the number of physical cores. That means you can't leave MAXDOP at the default of 0 either, since that allows the max: 64 in your case.

[Btw, if you do decide to completely disable hyperthreading for SQL -- not a good idea IMO -- then you should probably go ahead and disable it in the hardware as well, but that would require a restart of the machine.]
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39615402
Cool, so in this case the MAXDOP should be 32, as per the logical processor total?
0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39615414
Hmm, I doubt you have that many physical processors.

The MAXDOP value should NEVER exceed the number of physical processors.

I think that's one reason some people see so many issues with hyper-threading: they allow MAXDOP to exceed the number of physical cores, which can cause severe processing issues.
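To see the CPU layout SQL Server itself reports, and then cap parallelism, something like the following works (the MAXDOP value of 8 here is purely a placeholder; base yours on your actual physical core count):

```sql
-- cpu_count = logical processors visible to SQL Server;
-- hyperthread_ratio = logical cores exposed per physical package,
-- so cpu_count / hyperthread_ratio approximates the socket count.
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

-- Placeholder value: cap MAXDOP at (or below) the physical core count.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```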
0
 
LVL 7

Author Comment

by:Senior IT System Engineer
ID: 39615433
Ah, I see, so it means the physical CPU sockets then; it is 4.

Many thanks, Scott :-)
You are truly helpful to my learning journey in SQL.
0
 
LVL 35

Expert Comment

by:David Todd
ID: 39615570
Hi,

Physical processors (physical cores) != sockets != number of logical cores with hyper-threading turned on.

I would suggest that if you have big reports running against an OLTP database, and the transaction-entry users see slowdowns because of the reports, you reduce MAXDOP even further.

Regards
  David
0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39617078
I wouldn't arbitrarily assume it's parallelism that is causing slowness until/unless you verify it.  Slowness could be caused by any number of things.

But if you notice that "CXPACKET" waits are high and seem to be causing issues, then, yes, reduce MAXDOP somewhat and see if it helps.
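A quick way to check whether CXPACKET waits are actually significant is to look at the cumulative wait statistics (figures accumulate since the last instance restart or stats clear):

```sql
-- Cumulative waits since the last restart (or DBCC SQLPERF clear).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';
```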
0
