marrowyung asked:

tempdb on RAM for SQL 2019

hi,

I read this: https://sqlperformance.com/2019/08/tempdb/tempdb-enhancements-in-sql-server-2019

I am worried that after a restart SQL Server might not start up. If that happens, is there anything we can do to reverse the setting so we are no longer using tempdb metadata on RAM?

ASKER CERTIFIED SOLUTION from Raja Jegan R (member-only content not shown)
marrowyung (ASKER):

Would single user mode help?

We just turned on tempdb metadata on RAM for our SQL Server 2019, but it is on AWS and stopping the instance was very slow!

It finally restarted with tempdb metadata on RAM enabled.
Yes, single-user mode should definitely help, as it is the only way to restore system databases.
May I know whether you have a SQL Server instance at the server level or something similar to an Azure SQL instance? (I haven't worked in an AWS environment yet.)
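Not from the member-only answer, but as a rough recovery sketch: my understanding is that starting SQL Server with the -f (minimal configuration) or -m (single-user) startup parameter lets the instance come up so the feature can be turned off, after which you restart normally. The T-SQL below uses the documented syntax for the setting; treat the startup-parameter detail as an assumption to verify against the docs for your build.

-- Run once the instance is up (for example after starting with -f or -m)
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = OFF;
-- The change only takes effect after the SQL Server service is restarted.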
>> Yes, single-user mode should definitely help, as it is the only way to restore system databases.

I am worried about what happens if that does not help either.

It is at the server level, not a cloud DB.

Have you tried this feature?



arnold,

>> = NO.T;

What is that?

Are there any known technical issues or concerns we should be aware of with tempdb metadata on RAM?
Any special considerations, e.g. anything more we need to do before we restart, now that tempdb metadata on RAM is ON?

Like in that post:
https://sqlperformance.com/2019/08/tempdb/tempdb-enhancements-in-sql-server-2019

"In addition, it's not known if the memory-optimized tables are durable (SCHEMA_AND_DATA) or not (SCHEMA_ONLY). Typically this can be determined via sys.tables (durability_desc)."
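For what it's worth, the check the article refers to can be run against tempdb as below. This is just a sketch; since the tables behind this feature are hidden system tables, the query may well return nothing even when the feature is on.

USE tempdb;
GO
-- List memory-optimized tables visible in tempdb and whether they are SCHEMA_ONLY or SCHEMA_AND_DATA
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;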


It should have been NOT; see the NOTE: in the article where Erin included the directive on how to enable it.
Why not spin up a separate VM with SQL Server and test?

You would have contention for RAM resources.
>> You would have contention for RAM resources.

We will test it, but known issues and concerns can't be found just by testing, right?

You are talking about resource exhaustion, RAM in this case?

You could test by using a CTE to load a large data set.
>> You could test by using a CTE to load a large data set.

Are you saying that if we ALSO use CTEs there will be more RAM contention, and that moving tempdb metadata to RAM might not be a good idea?

We are just worried that tempdb is slow because it sits on shared AWS disk! If it is instead in RAM, even shared with other AWS clients, it could be better! We will know the result tomorrow.

The other department just says the overall performance is slow but doesn't know which part is slow, so we need to do something that improves the OVERALL performance and is easy to roll back if needed.

We are also trying SQL 2019 compatibility mode on the user DBs we know about, to see if it helps.
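For reference, a minimal sketch of what that change looks like; the database name is a placeholder for one of your user databases.

-- Check current compatibility levels
SELECT name, compatibility_level FROM sys.databases;
GO
-- Move a user database (placeholder name) to the SQL Server 2019 compatibility level
ALTER DATABASE [YourUserDb] SET COMPATIBILITY_LEVEL = 150;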



Much depends on your utilization.

The article you provided outlines the performance benefit, but you have to account for it in the RAM allocation to the VM.

I am uncertain whether the analysis presented covers the implications of running in a VM versus on physical hardware, and the resulting resource contention.

Your comment illustrates the point, i.e. when you activated the in-RAM metadata it consumed part of your VM's RAM allocation, so you may want to add RAM to the VM to account for the possibility of higher demand.

The article illustrates how the change benefits the scenario, testing included by Erin.

Everything requires testing based on your specific use and deployment.
>> you may want to add RAM to the VM to account for the possibility of higher demand.

I considered this before, and we may jump from 256 GB to 512 GB.

>> Everything requires testing based on your specific use and deployment.

Is there anything to verify whether the feature is causing problems after deployment, other than testing the system? The event log?
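Not from the member-only answers, but two quick checks you could script. SERVERPROPERTY below is documented for SQL Server 2019, while xp_readerrorlog is an undocumented but commonly used procedure for searching the error log.

-- Returns 1 if memory-optimized tempdb metadata is enabled on this instance
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized') AS tempdb_metadata_in_memory;
GO
-- Search the current SQL Server error log for tempdb-related messages after a restart
EXEC sys.xp_readerrorlog 0, 1, N'tempdb';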


With RAM being "cheaper" these days, the old PAGE = 2 x RAM rule no longer applies.

Depending on the issue, the page file can be set to a fixed size rather than letting Windows manage it.
Try something in the 2-9 GB range and look at what the stats suggest after some use.
Also consider the underlying storage...

Look at the advanced settings of the server, go through the performance options, and see what you have allocated for the page file and what it says would be beneficial.

RAM is usually the quickest improvement, but much depends on your scenario and what your users are complaining about.
At times the issue is the amount of data that has to be assembled and displayed in the browser.
At times the initial design did not limit the working table to only currently active data, so over time it has grown to 1,000,000 rows with only 40 active, and users complain it takes too long to switch from one record to the next.

Look at using SQL tuning/tracing/profiling tools to capture a trace during peak load, and then see what recommendations they make in terms of adding indexes, statistics, etc.
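As one way to capture such a trace, a minimal Extended Events sketch; the session name, duration threshold, and file name are placeholders to adjust for your environment.

-- Capture completed batches slower than roughly 2 seconds (duration is in microseconds)
CREATE EVENT SESSION [slow_queries] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 2000000
)
ADD TARGET package0.event_file (SET filename = N'slow_queries.xel');
GO
ALTER EVENT SESSION [slow_queries] ON SERVER STATE = START;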

Discussing such things in the abstract is difficult.

With each upgrade cycle, network/FC connections get faster, and storage components similarly get faster: SAS SSD, M.2, etc.
Raja Jegan,

>> how much paging file have you configured on this server for 256 GB RAM, and how much paging file are you planning to allocate for 512 GB RAM?

You mean the Windows page file?

>> The memory-optimized tempdb metadata feature improves the performance of #temp tables and degrades the performance of @table variables.

But they both use tempdb, right?

Actually, I just turned this on for one of our DB servers and performance is OK, but how can we verify that SQL Server operates well AFTER running tempdb metadata on RAM? Any event ID related to this?

>> Hence identify the slowly performing queries, try to investigate them manually, and advise your developers to use #temp tables instead of @table variables.

Are you saying we should not use this feature, but instead keep identifying slow queries daily and tune them accordingly?

arnold,

>> Depending on the issue, the page file can be set to a fixed size rather than letting Windows manage it.
>> Try something in the 2-9 GB range and look at what the stats suggest after some use.
>> Also consider the underlying storage...

Why is this related to Windows page files? Isn't the best practice for servers page file = 1.5 x the size of RAM?

>> Look at using SQL tuning/tracing/profiling tools to capture a trace during peak load, and then see what recommendations they make in terms of adding indexes, statistics, etc.

Yes, but tempdb metadata on RAM is for the OVERALL improvement, right? That tuning is for individual queries ONLY!


Hi both,

Actually I have done that on one of our SQL Server 2019 Enterprise instances, and the result is good! Overall performance has improved for days! We will wait one more week to see if there are problems, and we might then turn on this setting for the other one.

But the concerns are:
1) How can we know this feature is working well? Anything to check, even if SQL Server seems to work fine?
2) Any known potential issues on the server side?
3) On restart, since data in tempdb can still be in RAM at shutdown, is there any chance of data loss?
4) AFTER a restart, is this configuration still enabled?
>> You mean the Windows page file?
Yes, I was just asking to see how much is configured.

>> But they both use tempdb, right?
No, only #temp tables use tempdb, whereas @table variables use RAM.

>> Are you saying we should not use this feature, but instead keep identifying slow queries daily and tune them accordingly?
No, you can use the memory-optimized tempdb feature, but you can also reduce unwanted workload on RAM and push it to tempdb for better SQL Server performance.

>> 1) How can we know this feature is working well? Anything to check, even if SQL Server seems to work fine?
Yes, check the factors below.
1. Check the performance of SQL Server during peak times, or times with a lot of concurrent operations, to see how it responds.
2. Check complicated reports or other queries that use lots of #temp tables or @table variables, and measure their performance before and after enabling memory-optimized tempdb (for example with SET STATISTICS TIME, as sketched below).
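A minimal before/after measurement sketch; the procedure name is a placeholder for one of your own #temp-table-heavy queries.

-- Compare CPU and elapsed time for a representative query before and after enabling the feature
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
EXEC dbo.usp_YourTempTableHeavyReport;   -- placeholder for your own procedure or batch
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;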

>> 2) Any known potential issues on the server side?
There shouldn't be any major issues if the server is configured properly.

>> 3) On restart, since data in tempdb can still be in RAM at shutdown, is there any chance of data loss?
With or without memory-optimized tempdb metadata, data loss as such won't happen, provided your applications can handle a sudden restart.
The purpose of tempdb is to store temporary objects, so a restart as such wouldn't cause data loss unless you are wrongly using it to store permanent data.

>> 4) AFTER a restart, is this configuration still enabled?
Yes, it will stay enabled until you disable this feature.
>> Yes, I was just asking to see how much is configured.

Why is it related?

>> No, only #temp tables use tempdb, whereas @table variables use RAM.

But when there is not enough RAM, will it use tempdb? And it degrades the performance of @table variables because...?

>> You can use the memory-optimized tempdb feature, but you can also reduce unwanted workload on RAM and push it to tempdb for better SQL Server performance.

So use more #temp tables instead of @table variables, right?

But they share the SAME layer of RAM, right? That seems not very...



>> Why is it related?

No, they aren't related; I was just wondering how much paging file was configured on a server with 256 GB RAM.

>> But when there is not enough RAM, will it use tempdb? And it degrades the performance of @table variables because...?

@table variables are recommended only for very small data sets; if you are going to store larger data sets, use #temp tables instead of @table variables.

>> But they share the SAME layer of RAM, right? That seems not very...

To clarify: across all versions of SQL Server, the metadata of both #temp tables and @table variables is created in tempdb, and in normal usage @table variable data is pushed from memory to tempdb.
With the new memory-optimized tempdb metadata feature, we are optimizing the metadata of the tempdb database in RAM for faster access. As the name says, we are optimizing the metadata of tempdb in memory, not the data, and I hope that clarifies your question about when data is pushed to tempdb disk or memory.
>> @table variables are recommended only for very small data sets; if you are going to store larger data sets, use #temp tables instead of @table variables.
Yeah, but since tempdb data is still on disk, why should we do that?
Or maybe turning this ON does not help performance much in our case!

>> To clarify: across all versions of SQL Server, the metadata of both #temp tables and @table variables is created in tempdb, and in normal usage @table variable data is pushed from memory to tempdb.

So @table variables also use tempdb, mostly not RAM?
I didn't get your question clearly.
To summarize: even though the metadata of #temp tables and @table variables resides in the tempdb database, #temp table data will reside in tempdb whereas @table variable data will reside in RAM.
If we enable the memory-optimized tempdb metadata feature, then the metadata of both #temp tables and @table variables will be cached in RAM for faster access, while the data will still reside in tempdb for #temp tables and in RAM for @table variables.
Hope this clarifies.
>> If we enable the memory-optimized tempdb metadata feature, then the metadata of both #temp tables and @table variables will be cached in RAM for faster access, while the data will still reside in tempdb for #temp tables and in RAM for @table variables.
>> Hope this clarifies.

Good, thanks.

Is there any script to find out how much RAM the tempdb metadata is using, e.g. to monitor it?
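Not from the member-only answer, but two DMV queries commonly used to see how much memory the in-memory (XTP) structures backing this feature are consuming; both are documented views, and the column lists below are a sketch you may want to extend.

-- Memory clerks for the in-memory OLTP (XTP) engine, which backs memory-optimized tempdb metadata
SELECT type, name, pages_kb
FROM sys.dm_os_memory_clerks
WHERE type LIKE 'MEMORYCLERK_XTP%'
ORDER BY pages_kb DESC;
GO
-- Per-object memory use of memory-optimized structures inside tempdb
USE tempdb;
GO
SELECT object_id,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;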
