Solved

Delete top rows from SQL Server Everywhere

Posted on 2006-10-26
10
502 Views
Last Modified: 2008-01-16
I use a database table as a log. For simplicity's sake, here's a scaled-down version:
CREATE TABLE Log (
Date datetime NOT NULL PRIMARY KEY,
Title nvarchar(255) NOT NULL
)

Problem: I don't want the log to contain more than 11,000 items. When the log goes over that, it should delete 1,000 rows from the top.

Question: How do I delete the first 1,000 rows (the oldest entries) from this table in SQL Server Everywhere?

Limitations: SQL Server Everywhere does not support triggers, stored procedures, TOP n or SET ROWCOUNT.

Thanks.
0
Comment
Question by:halligang
10 Comments
 
LVL 17

Expert Comment

by:HuyBD
ID: 17810583
You can run the query below after inserting a new record, or schedule it in your application:
delete from Log  where Date not in (select top 11000 Date from Log  order by Date desc)
0
 
LVL 23

Expert Comment

by:adathelad
ID: 17810597
Hi

>> Limitations: SQL Server Everywhere does not support triggers, stored procedures, TOP n or SET ROWCOUNT
Makes it tricky :)

I'd suggest an alternative, whereby you delete rows that are over x days/weeks/months/years old.
This would allow you to use something like:
DELETE FROM Log WHERE Date < (DATEADD(m, -6, GETDATE()))
(This example deletes things over 6 months old).

You could then schedule this to run as a job every day.

0
 

Author Comment

by:halligang
ID: 17810847
HuyBD: thanks, but TOP is not a keyword in the Everywhere edition.

adathelad: thanks, but size is an issue, since some days may add a significant amount of log entries while others may produce none.


A solution I've been thinking about, but didn't quite complete (since stored procedures are not an option):

Add an Identity column, Id

Then, at regular intervals, make three calls to the database:
SELECT MIN(Id) from Log
SELECT MAX(Id) from Log

In my program code (roughly):
int tableRows = max - min;      // approximate number of rows
if (tableRows > 11000)
{
    int idLimit = max - 10000;
    // create and execute the query:
    // DELETE FROM Log WHERE Id < idLimit;
}

This may be a hint for a solution. Two problems with the above:
1. The Id column will eventually wrap. How can this be handled? A solution would be to copy all contents to a new table, which effectively resets the value, but that would be somewhat inefficient (roughly as sketched below).
2. Failed inserts will cause the identity column to skip over several values.
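
The copy-to-a-new-table reset could look something like this (just a sketch against the scaled-down table above; whether each statement is supported on SQL Server Everywhere is something I'd still have to verify):

CREATE TABLE Log2 (
    Id int IDENTITY(1,1) NOT NULL,
    Date datetime NOT NULL PRIMARY KEY,
    Title nvarchar(255) NOT NULL
)
-- copying in Date order restarts the identity sequence at 1
INSERT INTO Log2 (Date, Title)
SELECT Date, Title FROM Log ORDER BY Date
DROP TABLE Log
-- then rename Log2 back to Log (sp_rename on full SQL Server; the equivalent
-- step on Everywhere would need checking)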

Thanks.
0
 
LVL 1

Expert Comment

by:b2mje112
ID: 17811180
1.  select count(*) from Log
2.  Not sure how to do it if ID wraps.
0
 
LVL 69

Accepted Solution

by:Scott Pletcher (earned 500 total points)
ID: 17812461
You will need the id column, so go ahead and add it.  You will also need a work table (temp table would be best if available) with an ident column, like so:

CREATE TABLE workTable (  --or #workTable
    ident INT IDENTITY(1,1),
    id INT  --from main table  
    )
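
For reference, adding the id column to the main table might be done roughly like this (a sketch; whether an IDENTITY column can be added via ALTER TABLE on SQL Server Everywhere is an assumption to verify):

ALTER TABLE tableName ADD id INT IDENTITY(1,1) NOT NULL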

Then:

IF (SELECT COUNT(datetime) FROM tableName) >= 11000
BEGIN --{
    TRUNCATE TABLE workTable  --or DELETE FROM
    INSERT INTO workTable (id)
    SELECT id
    FROM tableName
    ORDER BY datetime
    DELETE FROM tableName
    WHERE id IN (SELECT id FROM workTable WHERE ident <= 1000)
END --}
0
 
LVL 69

Expert Comment

by:Scott Pletcher
ID: 17812474
Almost forgot, you should add an index on (datetime, id) on tableName.
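
Something along these lines, using the placeholder names from the snippets above (the index name is just an example):

CREATE INDEX IX_tableName_datetime_id ON tableName (datetime, id)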
0
 
LVL 17

Expert Comment

by:HuyBD
ID: 17816685
Building on ScottPletcher's idea:

create table #tmp(ID int identity(1,1), Date datetime, Title nvarchar(255))
insert into #tmp(Date, Title) select Date, Title from Log order by Date
delete from #tmp where ID <= 1000
delete from Log
insert into Log select Date, Title from #tmp
drop table #tmp

-- or, checking the record count first:

create table #tmp(ID int identity(1,1), Date datetime, Title nvarchar(255))
insert into #tmp(Date, Title) select Date, Title from Log order by Date
if exists (select 1 from #tmp where ID > 11000)
 begin
   delete from #tmp where ID <= 1000
   delete from Log
   insert into Log select Date, Title from #tmp
 end
drop table #tmp
0
 
LVL 69

Expert Comment

by:Scott Pletcher
ID: 17906461
Thanks!  But, D'OH, I left one thing out:


IF (SELECT COUNT(datetime) FROM tableName) >= 11000
BEGIN --{
    TRUNCATE TABLE workTable  --or DELETE FROM
    INSERT INTO workTable (id)
    SELECT TOP 1000 id     --<<--
    FROM tableName
    ORDER BY datetime
    DELETE FROM tableName
    WHERE id IN (SELECT id FROM workTable WHERE ident <= 1000)
END --}
0
 

Author Comment

by:halligang
ID: 17913784
Thanks. I built on your solution. But the final comment didn't follow the rules... TOP is not allowed on SQL Server Everywhere... :-D
0
 
LVL 69

Expert Comment

by:Scott Pletcher
ID: 17915786
OK; technically it's not needed -- it would just be more efficient to insert only the first 1000 instead of all of them.
0
