Solved

SQL 2005 Delete rows

Posted on 2014-02-13
Last Modified: 2014-02-17
I have a SQL Server 2005 table that has 250,000 rows. We only work with the last 1,000 or so rows. When it gets over 100,000 rows things start to slow down. I have been going in and deleting rows by opening the table in SQL Server Management Studio. If I delete more than 1,000 at a time it takes forever and most of the time it just locks up.

I read some stuff about truncating, but that erases the whole table. Is there a query I can run that will delete the rows in chunks without locking up SQL? Any other advice?

Thanks!
Question by:Tom_Hickerson
17 Comments
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
Does the table have a clustered index? If not, create one.
Do you select the rows in the GUI? Don't; use a DELETE FROM tablename WHERE ... statement instead.
Further tuning would be to index, and eventually partition, the table on the fields you use to decide which rows to keep or delete.
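To check whether a clustered index already exists, a query like this against sys.indexes should do it (just a sketch -- substitute your actual table name for the placeholder):

-- returns a row only if the table already has a clustered index
SELECT name, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.<your_table_name>')
  AND type_desc = 'CLUSTERED'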
 
LVL 33

Expert Comment

by:paulmacd
As noted by [Guy Hengel], an index may go a long way toward addressing your performance problems if you're not using one already. If you have an indexed column, you could - again, as already suggested - run something like:
   DELETE FROM tablename WHERE indexcolumn < somevalue

You could do something similar if you have a date column.
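For example (a sketch only -- the names below are placeholders, not columns from your actual table):

   DELETE FROM tablename WHERE datecolumn < DATEADD(MONTH, -1, GETDATE())

That would keep roughly the last month of rows; adjust the cutoff to whatever window you really need.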
 
LVL 1

Author Comment

by:Tom_Hickerson
I have a datetime field. Here is the table:

...GO
/****** Object:  Table [dbo].[opt_logs]    Script Date: 02/13/2014 13:56:24 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[opt_logs](
      [id] [int] IDENTITY(1,1) NOT NULL,
      [tag] [varchar](200) NOT NULL,
      [offset] [float] NOT NULL,
      [front_trim] [float] NOT NULL,
      [end_trim] [float] NOT NULL,
      [end_of_log] [float] NOT NULL,
      [butt_first] [bit] NOT NULL,
      [allow_export] [bit] NOT NULL,
      [selected_solution_id] [int] NULL,
      [settings_id] [int] NOT NULL,
      [datetime] [datetime] NOT NULL,
      [sort_solution] [varchar](200) NULL,
      [tag_first] [bit] NULL,
 CONSTRAINT [PK_opt_logs] PRIMARY KEY CLUSTERED
(
      [id] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[opt_logs]  WITH CHECK ADD  CONSTRAINT [FK_opt_logs_opt_settings] FOREIGN KEY([settings_id])
REFERENCES [dbo].[opt_settings] ([id])
GO
ALTER TABLE [dbo].[opt_logs] CHECK CONSTRAINT [FK_opt_logs_opt_settings]
 
LVL 69

Expert Comment

by:ScottPletcher
You definitely do need to add a clustered index to the table if it doesn't have one.  To determine that, run the commands below and post the results:

USE <your_db_name>
EXEC sp_helpindex '<your_table_name>'


>> I have been going in and deleting rows by opening the table in SQL Server Management Studio. <<

Because of the way the SSMS interface works, you definitely should not do that.


>> Is there a query I can run that will delete the rows in chunks without locking up SQL? Any other advice? <<

Not a single query but a script.  Once we know the clustering key, I can provide the script for you.
 
LVL 69

Expert Comment

by:ScottPletcher
OK, just saw the table definition.

How do you reference this table?  Is it by datetime or by id?

Often the clustered index is built on an identity column just by default, for no real reason at all (except that some "experts" strongly push that idea).

If you reference the table by datetime, then change the clustered index to datetime.  You will get vastly better performance out of the table, whether it is 20,000 rows or 500,000 rows.


ALTER TABLE opt_logs DROP CONSTRAINT PK_opt_logs

CREATE CLUSTERED INDEX CI_opt_logs ON dbo.opt_logs ( datetime ) WITH ( FILLFACTOR = 99, ONLINE = ON /*if Enterprise Edition*/ ) ON [PRIMARY]
 
LVL 1

Author Comment

by:Tom_Hickerson
There is a VB app that accesses the database by id. The datetime field is also used by the VB app.

So do I run the snippet that Scott suggests?
 
LVL 69

Expert Comment

by:ScottPletcher
If you actually do lookups by id, you could add a nonclustered index on id also:


ALTER TABLE opt_logs DROP CONSTRAINT PK_opt_logs

CREATE CLUSTERED INDEX CI_opt_logs ON dbo.opt_logs ( datetime ) WITH ( FILLFACTOR = 99, ONLINE = ON /*if Enterprise Edition*/ ) ON [PRIMARY]

CREATE UNIQUE NONCLUSTERED INDEX CX_opt_logs ON dbo.opt_logs ( id ) WITH ( FILLFACTOR = 99, ONLINE = ON /*if Enterprise Edition*/ ) ON [PRIMARY]
 
LVL 1

Author Comment

by:Tom_Hickerson
Scott,  I couldn't run what you had posted.  I got these errors.

Msg 1712, Level 16, State 1, Line 3
Online index operations can only be performed in Enterprise edition of SQL Server.
Msg 1712, Level 16, State 1, Line 5
Online index operations can only be performed in Enterprise edition of SQL Server.
 
LVL 33

Expert Comment

by:paulmacd
Thus his caveat about /*if Enterprise Edition*/.
 
LVL 69

Expert Comment

by:ScottPletcher
Sorry about that.  If it's not Enterprise Edition, we need to remove the ONLINE = ON option:


ALTER TABLE opt_logs DROP CONSTRAINT PK_opt_logs

CREATE CLUSTERED INDEX CI_opt_logs ON dbo.opt_logs ( datetime ) WITH ( FILLFACTOR = 99 ) ON [PRIMARY]

CREATE UNIQUE NONCLUSTERED INDEX CX_opt_logs ON dbo.opt_logs ( id ) WITH ( FILLFACTOR = 99 ) ON [PRIMARY]
 
LVL 1

Author Comment

by:Tom_Hickerson
More errors...
Msg 3725, Level 16, State 0, Line 1
The constraint 'PK_opt_logs' is being referenced by table 'opt_solutions', foreign key constraint 'FK_opt_solutions_opt_logs'.
Msg 3727, Level 16, State 0, Line 1
Could not drop constraint. See previous errors.
 
LVL 69

Expert Comment

by:ScottPletcher
Rats, FK(s) will be a real pain here.

You'll either have to live with the poor/bad performance of the table clustered as it currently is, or:

1) Script out all related FKs.
2) Temporarily drop the FKs.
3) Run the commands above to: drop the PK, create the new clus index, recreate the PK.
4) Recreate all the FKs.

This is a common situation, and there's no easy solution once the wrong clus index is in place.
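To sketch the whole sequence out (untested, and the referencing column in opt_solutions is a placeholder -- script the real FK definition out of your database before dropping anything):

-- 1) and 2): script out, then temporarily drop, the referencing FK
ALTER TABLE dbo.opt_solutions DROP CONSTRAINT FK_opt_solutions_opt_logs

-- 3): drop the old PK, recluster on datetime, and recreate the PK as nonclustered
ALTER TABLE dbo.opt_logs DROP CONSTRAINT PK_opt_logs
CREATE CLUSTERED INDEX CI_opt_logs ON dbo.opt_logs ( [datetime] ) WITH ( FILLFACTOR = 99 ) ON [PRIMARY]
ALTER TABLE dbo.opt_logs ADD CONSTRAINT PK_opt_logs PRIMARY KEY NONCLUSTERED ( id ) WITH ( FILLFACTOR = 99 ) ON [PRIMARY]

-- 4): recreate the FK; <referencing_column> is whatever column opt_solutions uses to point at opt_logs.id
ALTER TABLE dbo.opt_solutions WITH CHECK ADD CONSTRAINT FK_opt_solutions_opt_logs
    FOREIGN KEY ( <referencing_column> ) REFERENCES dbo.opt_logs ( id )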
 
LVL 1

Author Comment

by:Tom_Hickerson
What about deleting the old info without choking down sql?  Is there something I could do here besides going in and deleting 500 rows at a time?
 
LVL 69

Accepted Solution

by:ScottPletcher (earned 500 total points)
OOPS, good point.  Here's a script to do that, 1000 rows at a time -- naturally adjust that number to whatever works best for you:



USE [<your_db_name>]

DECLARE @total_rows_in_table int
DECLARE @rows_deleted int
DECLARE @total_rows_deleted int
SELECT @total_rows_in_table = COUNT(*)
FROM dbo.opt_logs WITH (NOLOCK)
SET @rows_deleted = 0
SET @total_rows_deleted = 0

WHILE 1 = 1
BEGIN
    DELETE FROM ol
    FROM dbo.opt_logs ol
    INNER JOIN (
        SELECT TOP (1000) id
        FROM dbo.opt_logs WITH (NOLOCK)
        ORDER BY id
    ) AS ol_top ON
        ol_top.id = ol.id
    SET @rows_deleted = @@ROWCOUNT
    SET @total_rows_deleted = @total_rows_deleted + @rows_deleted
    -- keep DELETEing until the table reaches 20000 or fewer rows
    IF @total_rows_in_table - @total_rows_deleted <= 20000
        BREAK
    IF @total_rows_deleted % 100000 = 0
    BEGIN
        CHECKPOINT
    END --IF
    WAITFOR DELAY '00:00:00.150'
END --WHILE

PRINT 'Total Rows Deleted = ' + CAST(@total_rows_deleted AS varchar(10))
 
LVL 75

Expert Comment

by:Anthony Perkins
>> FK(s) will be a real pain here. <<
The author could also use the SSMS table designer to do this; it should take care of all the foreign keys.
 
LVL 1

Author Closing Comment

by:Tom_Hickerson
Awesome.  It only took 30 seconds to delete all the extra rows.

Thanks!
 
LVL 69

Expert Comment

by:ScottPletcher
You're welcome!

The key tip here is to use the clustered index whenever possible, because that results in the least I/O for SQL (and SQL has a natural bias toward using the clustered index anyway).
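For what it's worth, once the table is clustered on [datetime], a chunked purge can seek directly on that column instead of joining on id. A rough sketch only (the 30-day cutoff is just an example, not something from this thread):

DECLARE @cutoff datetime
SET @cutoff = DATEADD(DAY, -30, GETDATE())  -- example retention window

WHILE 1 = 1
BEGIN
    -- delete the oldest rows in small batches so locks stay short
    DELETE TOP (1000) FROM dbo.opt_logs
    WHERE [datetime] < @cutoff

    IF @@ROWCOUNT = 0
        BREAK
    WAITFOR DELAY '00:00:00.150'
END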