We are using the full recovery model in SQL Server. We have a job that merges rows from a staging table into a final table. Both the staging table and the final table hold millions of rows, and we merge in batches of 10,000 rows.
The pseudocode for a single batch is:
DECLARE @TableVariable TABLE (/* columns of <Staging Table> */);
BEGIN TRANSACTION;
DELETE TOP (10000)
FROM <Staging Table>
OUTPUT deleted.* INTO @TableVariable;
MERGE INTO <Final Table> AS target
USING @TableVariable AS source
    ON target.<key> = source.<key>
WHEN MATCHED THEN UPDATE SET /* ... */
WHEN NOT MATCHED THEN INSERT /* ... */;
COMMIT TRANSACTION;
The problem is that every batch runs slower than the one before it. After we restart the server, the batches are fast again for a while. Commits also take a very long time to flush to disk. We suspect the transaction log is the bottleneck. When we reduce the batch size, more transactions are generated and the batches slow down even more.
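To check whether log growth or blocked log truncation is behind the slowdown, a diagnostic along these lines could be run between batches (a sketch; the database name is a placeholder):

```sql
-- Shows what, if anything, is preventing log truncation
-- (e.g. LOG_BACKUP means a log backup is overdue,
--  ACTIVE_TRANSACTION means a long-running transaction holds the log)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';

-- Shows log file size and percentage currently in use per database
DBCC SQLPERF (LOGSPACE);
```

If `log_reuse_wait_desc` reports `LOG_BACKUP` and the log space used keeps climbing across batches, that would point to the log rather than the MERGE itself.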
Is there a way to improve the performance of this kind of batched delete-and-merge operation? Would you recommend forcing a CHECKPOINT under the full recovery model?