SQL: updating a large number of rows


Note that I did not try any of these tests with compression enabled (possibly a future test!), and I left the log autogrow settings at the terrible default (10%) – partly out of laziness and partly because many environments out there have retained this awful setting.

Many times it turns out that they were performing a large delete operation, such as purging or archiving data, in one large transaction. I wanted to run some tests to show the impact, on both duration and the transaction log, of performing the same data operation in chunks versus in a single transaction.

Population of the table and creation of the indexes took ~24 minutes. The table has 48.5 million rows and takes up 7.9 GB on disk (4.9 GB in data and 2.9 GB in index). Performing the delete in a single statement took 42 seconds in full recovery and 43 seconds in simple. The next set of tests had a couple of surprises for me.
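For reference, the "full" and "simple" numbers refer to the database recovery model, which can be flipped between test runs with something along these lines (TestDB is a placeholder name, not necessarily the database used in these tests):

-- TestDB is a placeholder; substitute the actual test database.
ALTER DATABASE TestDB SET RECOVERY SIMPLE;
-- ... run the delete test, measure duration and log growth ...
ALTER DATABASE TestDB SET RECOVERY FULL;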

So next I had to determine what I wanted to test for the greatest impact.

Since I was involved in a discussion with a co-worker just yesterday about deleting data in chunks, I chose deletes.

And since the clustered index on this table is on the order ID, the delete would touch rows all over the table: 456,960 rows (about 10% of the table), spread across many orders.
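As a rough sketch of what the single-statement version of such a delete looks like (the table name and filter here are hypothetical placeholders, since the actual schema and predicate aren't shown in this excerpt):

-- Hypothetical table name and filter, for illustration only.
DELETE dbo.SalesOrderDetailBig
WHERE ProductID IN (101, 102, 103);

Run this way, the entire delete is one transaction, so the log has to hold all of that work at once regardless of recovery model.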

The large update has to be broken down into small batches, say 10,000 rows at a time. Batching also makes it easy to restart in case of interruption.
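A minimal sketch of that pattern, reusing the same hypothetical table and filter with a batch size of 10,000: each batch commits on its own, so if the script is interrupted it can simply be run again and it will pick up where it left off.

WHILE 1 = 1
BEGIN
    -- Delete one batch of at most 10,000 qualifying rows (hypothetical filter).
    DELETE TOP (10000) dbo.SalesOrderDetailBig
    WHERE ProductID IN (101, 102, 103);

    -- When a batch finds nothing left to delete, the work is done.
    IF @@ROWCOUNT = 0
        BREAK;
END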

A WAITFOR DELAY can be included between batches to throttle the processing.
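If the loop is throttled, the pause goes at the bottom of each iteration; the two-second delay here is just an arbitrary example:

    -- Inside the loop above, after each batch commits:
    WAITFOR DELAY '00:00:02';

A longer delay trades total runtime for less sustained pressure on the log and on concurrent workloads.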
