SQL Server Compression - Pros and Cons

Data compression has been around for years, and while it is not a new technology, it is new to SQL Server. Data compression was first introduced in SQL Server 2008, where it is available only in the Enterprise Edition. Compression achieves two main objectives: first, it can significantly reduce the size of physical database files; second, using index compression, query performance can be vastly improved. The second objective is what this article addresses.

Because compression increases the amount of data stored at the page level of an index, SQL Server has to perform fewer I/O operations to retrieve the data it requires. The number of I/O operations can be drastically reduced, as a later example will demonstrate.

Once compression has been enabled for an index, everything else is managed automatically by SQL Server. The Storage Engine is responsible for handling compression. When data is passed to the Storage Engine, it compresses and stores the data in compressed format; when data is passed back from the Storage Engine to another part of SQL Server, it decompresses the data.

Compressing and decompressing data does require additional CPU overhead, but normally the gain in I/O performance more than makes up for the CPU cost. If your server is already struggling with high CPU usage, then a careful study of compression is needed to determine whether it is beneficial; otherwise it is most likely an excellent exercise for boosting performance.
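Before committing to compression, SQL Server ships a built-in estimation procedure, sp_estimate_data_compression_savings, that reports how much space a given compression level would save without changing anything. A sketch against the example table used later in this article (the schema and table names are this article's example; substitute your own):

```sql
-- Estimate space savings from PAGE compression for a table.
-- @index_id = NULL estimates every index on the table;
-- @partition_number = NULL covers all partitions.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'DTH_CallRecordMaster',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';
```

The result set includes current and estimated sizes per index, which helps weigh the I/O benefit against the CPU cost before any rebuild.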

• Example
Compressed indexes can be created either when the index is first created, using CREATE INDEX ... WITH (DATA_COMPRESSION = level), where level can be either ROW or PAGE, or after the index is created, using the ALTER INDEX command and again specifying the level. This example uses PAGE-level compression, as this is where the major performance gains can be achieved.
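The create-time form might look like the following sketch (the index name is assumed for illustration; the table and columns mirror this article's example):

```sql
-- Create a nonclustered index with PAGE compression from the start.
CREATE NONCLUSTERED INDEX IX_CallRecordMaster_CustomerID
    ON dbo.DTH_CallRecordMaster (CustomerID, CallCost, CallTotal)
    INCLUDE (CallID, UniqueID, RetailRate, CallTax)
    WITH (DATA_COMPRESSION = PAGE);
```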
The following query will be executed on a table with approximately 18 million rows.  

SELECT [CallID]
      ,[UniqueID]
      ,[CustomerID]
      ,[RetailRate]
      ,[CallCost]
      ,[CallTax]
      ,[CallTotal]
  FROM [DTH_CallRecordMaster] WHERE CustomerID = 'XXXX'
  ORDER BY CallCost, CallTotal DESC
  COMPUTE SUM(CallCost)

Initially, there is a nonclustered, uncompressed index on CustomerID, CallCost, and CallTotal, with included columns CallID, UniqueID, RetailRate, and CallTax.

The following execution plan is produced (screenshot not reproduced here):

Logical Reads: 874

Compression is then enabled on the same index using the ALTER INDEX command.
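That rebuild might look like this sketch (the index name is assumed for illustration; ONLINE = ON is optional and, like compression itself, an Enterprise Edition feature):

```sql
-- Rebuild the existing index with PAGE compression enabled.
ALTER INDEX IX_CallRecordMaster_CustomerID
    ON dbo.DTH_CallRecordMaster
    REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```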
Executing the exact same query produces an identical execution plan (screenshot not reproduced here):

Logical Reads: 537

The important thing to notice is the difference in logical reads. Just by enabling index compression, logical reads dropped from 874 to 537, a reduction in I/O of roughly 39%.
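The logical-read counts above can be observed for any query with SET STATISTICS IO; for example:

```sql
-- Report per-table I/O statistics for each statement in this session.
SET STATISTICS IO ON;

SELECT CallID, UniqueID, CustomerID, RetailRate,
       CallCost, CallTax, CallTotal
FROM dbo.DTH_CallRecordMaster
WHERE CustomerID = 'XXXX'
ORDER BY CallCost, CallTotal DESC;

SET STATISTICS IO OFF;
-- The Messages tab then shows a line such as:
-- Table 'DTH_CallRecordMaster'. Scan count 1, logical reads <n>, ...
```

Comparing the logical-reads figure before and after the rebuild gives the same measurement used in this example.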

• Conclusion
Index compression can dramatically improve performance by reducing the I/O necessary to find data, and it also allows more data to be held in cache, again reducing I/O operations. There is CPU overhead involved in compressing and decompressing data, so if your server's CPU cycles are already stressed, compression needs to be studied carefully. Generally, though, compression is a good practice, as it can improve performance with very little work required. There is still work involved in analyzing which objects in the database would benefit from compression, weighed against the increased CPU cost, so careful analysis of the data and the server environment is needed.
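As part of that analysis, the catalog views record which objects are already compressed; sys.partitions exposes a data_compression_desc column. A sketch for auditing the current database:

```sql
-- List every index/heap partition that currently uses compression.
SELECT OBJECT_NAME(p.object_id) AS table_name,
       i.name                   AS index_name,
       p.partition_number,
       p.data_compression_desc  -- NONE, ROW, or PAGE
FROM sys.partitions AS p
JOIN sys.indexes    AS i
  ON i.object_id = p.object_id
 AND i.index_id  = p.index_id
WHERE p.data_compression <> 0;
```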

This Article also appears on our website at : http://www.sqlxl.com/index_compression.php
Author: sqlxl