Solved

Inserts running very slow from Application to Database in SQL Server 2012

Posted on 2016-10-11
5
140 Views
Last Modified: 2016-11-09
Hi,

We are facing a slowness issue with application jobs. The job inserts records through the application into a SQL Server 2012 table that already contains 100 million records.

No Primary key, no indexes on table.

Last wait types:

WRITELOG: 74%
ASYNC_NETWORK_IO: 15%

Please suggest troubleshooting steps.

Thanks,
Chandra
Question by:Chandra Mohan Kanithi
5 Comments
 
LVL 13

Accepted Solution

by:
Nakul Vachhrajani earned 250 total points
ID: 41839042
Hello!

Why don't you have any primary key and clustered indexes on the table?

Right now you have a heap, which means your SELECTs do full table scans, and your INSERTs and UPDATEs lock up the entire table most of the time due to lock escalations arising from fragmentation.

1. The first step, therefore, is to determine why you can't have a clustered index on the table. If there is no hard reason, define one.
2. Next, check your file-growth settings (for both your database and tempdb). If the process has to provision additional space from the OS, getting that space provisioned takes time and resources. The solution is a higher initial size and a larger file-growth increment (in fixed MB/GB, not a percentage).
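As a sketch of both steps (all table, column, file and database names below are placeholders, not from the original question):

```sql
-- Sketch only: dbo.AppRecords, Id, MyAppDb and the file names are hypothetical.

-- Step 1: define a clustered index, ideally on an ever-increasing column
CREATE CLUSTERED INDEX CIX_AppRecords_Id
    ON dbo.AppRecords (Id);

-- Step 2: pre-size the files and grow by a fixed increment, not a percentage
ALTER DATABASE MyAppDb
    MODIFY FILE (NAME = N'MyAppDb_data', SIZE = 51200MB, FILEGROWTH = 1024MB);
ALTER DATABASE MyAppDb
    MODIFY FILE (NAME = N'MyAppDb_log',  SIZE = 8192MB,  FILEGROWTH = 1024MB);
```

Note that building a clustered index on a 100-million-row heap rewrites the whole table and, without the Enterprise-only ONLINE = ON option, blocks access while it runs, so schedule it in a maintenance window.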

Do share your inputs on these thoughts. We can start looking at the code once the design and environment is optimal.
 
LVL 13

Expert Comment

by:Nakul Vachhrajani
ID: 41839061
Also, update the statistics on your table.
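For example (the table name is a placeholder); FULLSCAN gives the most accurate statistics at the cost of reading the whole table:

```sql
-- Rebuild all statistics on the table from a full scan rather than a sample
UPDATE STATISTICS dbo.AppRecords WITH FULLSCAN;
```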
 
LVL 27

Expert Comment

by:Zberteoc
ID: 41839161
1. Make sure that the growth settings for the log file and the data file are set in MB, not percent, and that the increment is at least 512 MB or even 1 GB.

2. Make sure that the number of VLFs in the log file is not higher than 1000. If it is, you will have to change it. You can use the script below to check the VLF count for all the databases on your server. The commented-out part at the bottom is to be used to reduce the VLF count:
-- modify size:
-- ALTER DATABASE [CustomerDataStore] MODIFY FILE ( NAME = N'Accounts_log', SIZE = 16777216KB , FILEGROWTH = 524288KB )


-- ##### SQL Server 2012 and later; for earlier versions remove the
-- ##### RecoveryUnitId column from #log_info (it was added in 2012)
DECLARE @query varchar(1000),
 @dbname varchar(1000),
 @count int

SET NOCOUNT ON

DECLARE csr CURSOR FAST_FORWARD READ_ONLY
FOR
SELECT name
FROM sys.databases

CREATE TABLE #loginfo
(
 dbname varchar(100),
 VLFs int)

OPEN csr

FETCH NEXT FROM csr INTO @dbname

WHILE (@@fetch_status <> -1)
BEGIN

CREATE TABLE #log_info
(
 RecoveryUnitId tinyint,
 fileid tinyint,
 file_size bigint,
 start_offset bigint,
 FSeqNo int,
[status] tinyint,
 parity tinyint,
 create_lsn numeric(25,0)
)

SET @query = 'DBCC loginfo (' + '''' + @dbname + ''') '

INSERT INTO #log_info
EXEC (@query)

SET @count = @@rowcount

DROP TABLE #log_info

INSERT #loginfo
VALUES(@dbname, @count)

FETCH NEXT FROM csr INTO @dbname

END

CLOSE csr
DEALLOCATE csr

create table #logUse(dbname varchar(100),LogSizeMB decimal(10,2), LogUsePerc  decimal(10,2), Stat bit)
insert into #logUse EXEC('dbcc sqlperf(logspace)')

SELECT 
	i.dbname,
	LogSizeMB,
	LogUsePerc,
	VLFs as VLFs
FROM 
	#loginfo i
	inner join #logUse u
		on u.dbname=i.dbname
--WHERE VLFs >= 50 --My rule of thumb is 50 VLFs. Your mileage may vary.
ORDER BY VLFs desc

DROP TABLE #loginfo
drop table #logUse


/*
-- ##### !!! DO NOT RUN DURING PEAK HOURS !!! #####
-- ##### How to change VLF number

USE <db_name> --Set db name before running using drop-down above or this USE statement

DECLARE @file_name sysname,
@file_size int,
@file_growth int,
@shrink_command nvarchar(max),
@alter_command nvarchar(max)

SELECT @file_name = name,
@file_size = (size / 128)
FROM sys.database_files
WHERE type_desc = 'log'

select @file_name,@file_size

SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0, TRUNCATEONLY)'
PRINT @shrink_command
EXEC sp_executesql @shrink_command

SELECT @shrink_command = 'DBCC SHRINKFILE (N''' + @file_name + ''' , 0)'
PRINT @shrink_command
EXEC sp_executesql @shrink_command

SELECT @alter_command = 'ALTER DATABASE [' + db_name() + '] MODIFY FILE (NAME = N''' + @file_name + ''', SIZE = ' + CAST(@file_size AS nvarchar) + 'MB)'
PRINT @alter_command
EXEC sp_executesql @alter_command
--*/


To understand the VLF numbers check these 2 links:

http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/

http://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/
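To check point 1 above (whether any file still grows by a percentage rather than a fixed MB amount), a query like this against sys.master_files can be used:

```sql
-- Flag databases whose files still grow by a percentage rather than fixed MB
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file,
       type_desc,
       size / 128           AS size_mb,   -- size is stored in 8 KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY database_name, type_desc;
```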

Also, what Nakul mentioned is important. You will have to create a PK on the table, and make sure you use a column, or a combination of columns, whose values are always increasing, e.g. an ID column or an inserted-date column as the leading key.
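As a sketch (the table and constraint names are hypothetical), an IDENTITY column makes a convenient always-increasing clustering key:

```sql
-- Add an always-increasing surrogate key and cluster the table on it
ALTER TABLE dbo.AppRecords
    ADD Id bigint IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_AppRecords PRIMARY KEY CLUSTERED;
```

On a table this size the ALTER rewrites the heap into a clustered index, so run it off-hours with enough log space available.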
 
LVL 50

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 250 total points
ID: 41846354
We are facing slowness issue with Application Jobs.
(...)
 No Primary key, no indexes on table.
Is the slowness only during inserts?
Mind that indexes are needed for all non-INSERT DML commands. If you only perform INSERTs on the table, then it's acceptable not to have indexes, but then why do you need a table that you never query?
 
LVL 50

Expert Comment

by:Vitor Montalvão
ID: 41866663
Chandra, feedback would be appreciated.
