I have a SQL trace table, and each night we need to delete all rows in the Trace more than x days old. This is typically called a purge program, or mass delete. If you don’t limit how much you delete at once, you can knock the lights out of SQL Server.

Below is a modification of code I found here: http://dbadiaries.com/how-to-delete-millions-of-rows-using-t-sql-with-reduced-impact. That blog does a good job of explaining why the deletes need to be broken into smaller chunks.

I added the “WAITFOR” statement, which is optional. If you have other jobs running that might be blocked by your delete, you don’t want to hold them up. So you simply pause between batches, give them a chance to get in and out, then continue with the deletes.

-- Technique to avoid SQL slowness caused by deleting a large number of rows
-- (such as millions) in a single statement, which holds locks the whole time.
-- Adapted from http://dbadiaries.com/how-to-delete-millions-of-rows-using-t-sql-with-reduced-impact

DECLARE @DaysToKeep int = 5;
DECLARE @RowsDeleted int = 1;   -- must start > 0 so the loop is entered the first time through
DECLARE @TotalRowsDeleted int = 0;
DECLARE @StartDateTime datetime = DATEADD(d, -@DaysToKeep, GETDATE());

WHILE @RowsDeleted > 0
BEGIN   -- BEGIN/END is required here; without it, only the DELETE repeats and the loop never exits
    DELETE TOP (1000) FROM Trace WHERE trcDateTime <= @StartDateTime;
    SET @RowsDeleted = @@ROWCOUNT;
    PRINT @RowsDeleted;
    SET @TotalRowsDeleted = @TotalRowsDeleted + @RowsDeleted;
    WAITFOR DELAY '00:00:01';   -- give SQL a little break
END;

SELECT @StartDateTime               AS StartDateTimeOfDelete,
       @TotalRowsDeleted            AS RowsDeleted,
       (SELECT COUNT(*) FROM Trace) AS RemainingRows,
       MIN(trcDateTime)             AS MinDateTime,
       MAX(trcDateTime)             AS MaxDateTime
FROM Trace;

--select count(*) from trace where trcDateTime >= DATEADD(d, -15, getdate())
--declare @DaysToKeep int = 20
--select DATEADD(d, -@DaysToKeep, getdate())
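One more thing worth checking: since every batch filters on trcDateTime, the loop above performs far better if that column is indexed; otherwise each DELETE TOP (1000) has to scan the table looking for old rows. A minimal sketch, assuming the Trace table from the script above lives in the dbo schema (the index name here is my own invention; adjust it to your naming convention):

-- Hypothetical index name; Trace and trcDateTime come from the script above.
-- An index on the purge column lets each DELETE TOP (1000) seek straight
-- to the oldest rows instead of scanning the whole table per batch.
CREATE NONCLUSTERED INDEX IX_Trace_trcDateTime
    ON dbo.Trace (trcDateTime);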

Filed under: SQL