
I need to scrub a SQL Server table on a regular basis, but my solution takes a ridiculously long time (about 12 minutes for 73,000 records).

My table has 4 fields:

id1
id2
val1
val2

For each group of records with the same "id1", I need to keep the first (lowest "id2") and the last (highest "id2") and delete everything in between, except where "val1" or "val2" changed from the previous (next-lowest "id2") record.
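Stated as code, the retention rule looks like this (a minimal sketch over an in-memory list; the `ScrubRule` helper is illustrative only, and rows are `{id1, id2, val1, val2}` arrays assumed sorted by id1, id2):

```java
import java.util.*;

public class ScrubRule {
    // Returns the (id1, id2) pairs to delete: a row goes only if it is
    // neither the first nor the last of its id1 group AND its val1/val2
    // are unchanged from the previous row in the group.
    static List<int[]> rowsToDelete(List<int[]> rows) {
        List<int[]> doomed = new ArrayList<>();
        for (int i = 1; i + 1 < rows.size(); i++) {
            int[] prev = rows.get(i - 1), cur = rows.get(i), next = rows.get(i + 1);
            boolean sameGroup = prev[0] == cur[0] && cur[0] == next[0];
            boolean unchanged = prev[2] == cur[2] && prev[3] == cur[3];
            if (sameGroup && unchanged) doomed.add(new int[]{cur[0], cur[1]});
        }
        return doomed;
    }
}
```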

If you have followed me so far: what would a more efficient algorithm be? Here is my Java code:

boolean bDEL=false;
qps = conn.prepareStatement("SELECT id1, id2, val1, val2 from STATUS_DATA ORDER BY id1, id2");
qrs = qps.executeQuery();
//KEEP FIRST & LAST, DISCARD EVERYTHING ELSE *EXCEPT* WHERE CHANGE IN val1 or val2
while (qrs.next()) {
    thisID1 = qrs.getInt("id1");
    thisID2 = qrs.getInt("id2");
    thisVAL1= qrs.getInt("val1");
    thisVAL2= qrs.getDouble("val2");
    if (thisID1==lastID1) {     
        if (bDEL) {             //Ensures this is not the last record
            qps2 = conn2.prepareStatement("DELETE FROM STATUS_DATA where id1="+lastID1+" and id2="+lastID2);
            qps2.executeUpdate();
            qps2.close();
            bDEL = false;
        }
        if (thisVAL1==lastVAL1 && thisVAL2==lastVAL2) {
            bDEL = true;
        }
    } else if (bDEL) bDEL=false;
    lastID1 = thisID1;
    lastID2 = thisID2;
    lastVAL1= thisVAL1;
    lastVAL2= thisVAL2;
}

UPDATE 2015-04-20 11:10 AM

OK, here is my final solution. For each record, the Java code appends an XML element to a string; every 10,000 records the string is written to a file, and Java then calls a stored procedure on SQL Server, passing it the file name to read. The stored procedure can only use the file name as a variable if the OPENROWSET is executed via dynamic SQL. I will still play with the interval between procedure executions, but so far my performance results are:

BEFORE (delete one record at a time):
73,000 records processed, 101 records/second

AFTER (bulk XML import):
1.4 million records processed, 5,800 records/second

JAVA snippet:

String ts, sXML = "<DataRecords>\n";
boolean bDEL=false;
qps = conn.prepareStatement("SELECT id1, id2, val1, val2 from STATUS_DATA ORDER BY id1, id2");
qrs = qps.executeQuery();
//KEEP FIRST & LAST, DISCARD EVERYTHING ELSE *EXCEPT* WHERE CHANGE IN val1 or val2
while (qrs.next()) {
    thisID1 = qrs.getInt("id1");
    thisID2 = qrs.getInt("id2");
    thisVAL1= qrs.getInt("val1");
    thisVAL2= qrs.getDouble("val2");
    if (bDEL && thisID1==lastID1) {                             //Ensures this is not the first or last record
        sXML += "<nxtrec id1=\""+lastID1+"\" id2=\""+lastID2+"\"/>\n";
        if ((i + 1) % 10000 == 0) {                             //Execute every 10000 records
            sXML += "</DataRecords>\n";                         //Close off Parent Tag
            ts = String.valueOf((new java.util.Date()).getTime());  //Each XML File Uniquely Named
            writeFile(sDir, "ds"+ts+".xml", sXML);              //Write XML to file

            conn2 = dataSource.getConnection();
            cs = conn2.prepareCall("EXEC SCRUB_DATA ?");
            cs.setString(1, sDir + "ds"+ts+".xml");
            cs.executeUpdate();                                 //Execute Stored Procedure
            cs.close(); conn2.close();
            deleteFile(sDir, "ds"+ts+".xml");                   //Delete File

            sXML = "<DataRecords>\n";
        }
        bDEL = false;
    }
    if (thisID1==lastID1 && thisVAL1==lastVAL1 && thisVAL2==lastVAL2) {
        bDEL = true;
    } else {
        bDEL = false;
    }
    lastID1 = thisID1;
    lastID2 = thisID2;
    lastVAL1= thisVAL1;
    lastVAL2= thisVAL2;
    i++;
}
qrs.close(); qps.close(); conn.close();

sXML += "</DataRecords>\n";
ts = String.valueOf((new java.util.Date()).getTime());
writeFile(sDir, "ds"+ts+".xml", sXML);

conn2 = dataSource.getConnection();
cs = conn2.prepareCall("EXEC SCRUB_DATA ?");
cs.setString(1, sDir + "ds"+ts+".xml");
cs.executeUpdate();
cs.close(); conn2.close();
deleteFile(sDir, "ds"+ts+".xml");

XML file output:

<DataRecords>
<nxtrec id1="100" id2="1112"/>
<nxtrec id1="100" id2="1113"/>
<nxtrec id1="100" id2="1117"/>
<nxtrec id1="102" id2="1114"/>
...
<nxtrec id1="838" id2="1112"/>
</DataRecords>

SQL SERVER stored procedure:

CREATE PROCEDURE [dbo].[SCRUB_DATA] @floc varchar(100)  -- File Location (dir + filename) as only parameter
AS

BEGIN
        SET NOCOUNT ON;

        DECLARE @sql as varchar(max);
        SET @sql = '
                DECLARE @XmlFile XML

                SELECT @XmlFile = BulkColumn 
                FROM  OPENROWSET(BULK ''' + @floc + ''', SINGLE_BLOB) x;

                CREATE TABLE #TEMP_TABLE (id1 INT, id2 INT);

                INSERT INTO #TEMP_TABLE (id1, id2)  
                SELECT
                        id1 = DataTab.value(''@id1'', ''int''),
                        id2 = DataTab.value(''@id2'', ''int'')
                FROM
                        @XmlFile.nodes(''/DataRecords/nxtrec'') AS XTbl(DataTab);

                delete from D
                from STATUS_DATA D
                inner join #TEMP_TABLE T on ( (T.id1 = D.id1) and (T.id2 = D.id2) );    
        ';
    EXEC (@sql);    
END

3 Answers


Almost certainly, your performance problem is not in your algorithm but in the implementation. Say your scrub step has to remove 10,000 records: that means 10,000 round trips to the database server.

Instead of doing that, write each id pair to be deleted into an XML file and send that file to a SQL Server stored procedure that shreds the XML into a corresponding temp (or table-variable) table. Then delete all 10K rows with a single DELETE (or equivalent).
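Building that XML on the client is plain string assembly; a sketch (hypothetical `BatchXml` helper, using `StringBuilder` rather than repeated `String` concatenation, which is quadratic):

```java
import java.util.*;

public class BatchXml {
    // Render (id1, id2) pairs as the <DataRecords> document the
    // stored procedure will shred. StringBuilder keeps this O(n).
    static String toXml(List<int[]> pairs) {
        StringBuilder sb = new StringBuilder("<DataRecords>\n");
        for (int[] p : pairs) {
            sb.append("<nxtrec id1=\"").append(p[0])
              .append("\" id2=\"").append(p[1]).append("\"/>\n");
        }
        return sb.append("</DataRecords>\n").toString();
    }
}
```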

If you don't know how to shred XML in TSQL, it is worth the time to learn. A simple example will get you going; a couple of search results for "tsql shred xml" are enough to get started.

Added

Pulling 10K records to the client should take well under a second, and the same goes for your Java code. If you don't have the time to learn the XML approach as suggested, you could write a quick-and-dirty stored procedure that accepts 10 (20? 50?) id pairs and deletes the corresponding records inside the procedure. I use the XML approach regularly to "batch" things from the client. If your batches are "large", you could look at SQL Server's BULK INSERT command -- but the XML is easy and more flexible, since it can contain nested data structures (e.g., master/detail relationships).
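A quick-and-dirty variant is possible even without a stored procedure: build one parameterized DELETE covering N pairs and bind all the ids into a single PreparedStatement, so N rows cost one round trip. A sketch of the SQL-building half (hypothetical `PairDelete` helper; the binding loop would then call `setInt` for each pair):

```java
public class PairDelete {
    // Generate "DELETE FROM STATUS_DATA WHERE (id1=? AND id2=?) OR ..."
    // with n placeholder pairs, for use with a single PreparedStatement.
    static String buildDeleteSql(int n) {
        StringBuilder sb = new StringBuilder("DELETE FROM STATUS_DATA WHERE ");
        for (int i = 0; i < n; i++) {
            if (i > 0) sb.append(" OR ");
            sb.append("(id1=? AND id2=?)");
        }
        return sb.toString();
    }
}
```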

Added

I just did this locally:

create table #tmp
(
  id int not null,
  primary key (id)
)
GO
insert #tmp (id)
  select 4
union
  select 5
GO

-- #tmp now has two rows

delete from L
from TaskList L
inner join #tmp T on (T.id = L.taskID)

(2 row(s) affected)

-- and they are no longer in TaskList

That is, unless you are somehow doing it wrong, this should not be the problem. Are you creating the temp table and then trying to use it from a different database connection/session? If the session is different, the temp table will not be visible in the second session.

Off the top of my head, it is hard to think of another way for this to go wrong.

Answered 2014-04-14T20:08:20.270

Have you considered doing something that pushes more of the calculating to SQL instead of java?

This is ugly and doesn't take into account your "value changing" part, but it could be a lot faster:

(This deletes everything except the highest and lowest id2 for each id1)

select * into #temp 
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY id1 ORDER BY id2) AS 'RowNo', 
* from myTable)x 


delete i from myTable i
    left outer join
    (select t.* from #temp t
        left outer join (select id1, max(rowNo) rowNo from #temp group by id1) x 
        on x.id1 = t.id1 and x.rowNo = t.RowNo
    where t.RowNo != 1 and x.rowNo is null)z 
    on z.id2 = i.id2 and z.id1 = i.id1
where z.id1 is not null
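For comparison, the same keep-only-the-extremes rule (ignoring the value-change exception, as this answer does) can be sketched in plain Java; the `Extremes` helper is illustrative only, and rows are `{id1, id2}` pairs:

```java
import java.util.*;

public class Extremes {
    // Return the (id1, id2) pairs that are neither the minimum nor the
    // maximum id2 within their id1 group -- the rows this answer deletes.
    static List<int[]> middlesToDelete(List<int[]> rows) {
        Map<Integer, int[]> minMax = new LinkedHashMap<>();
        for (int[] r : rows) {
            int[] mm = minMax.get(r[0]);
            if (mm == null) minMax.put(r[0], new int[]{r[1], r[1]});
            else { mm[0] = Math.min(mm[0], r[1]); mm[1] = Math.max(mm[1], r[1]); }
        }
        List<int[]> doomed = new ArrayList<>();
        for (int[] r : rows) {
            int[] mm = minMax.get(r[0]);
            if (r[1] != mm[0] && r[1] != mm[1]) doomed.add(r);
        }
        return doomed;
    }
}
```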
Answered 2014-04-14T21:01:22.703

Never underestimate the power of SQL =)

While I understand it may seem more "straightforward" to implement this row by row, doing it "set-based" will make it fly.

Some code to create test data:

SET NOCOUNT ON


IF OBJECT_ID('mySTATUS_DATA') IS NOT NULL DROP TABLE mySTATUS_DATA 
GO

CREATE TABLE mySTATUS_DATA (id1 int NOT NULL,
                            id2 int NOT NULL,
                            val1 varchar(100) NOT NULL,
                            val2 varchar(100) NOT NULL,
                            PRIMARY KEY (id1, id2))

GO
DECLARE @counter int,
        @id1     int,
        @id2     int,
        @val1    varchar(100),
        @val2    varchar(100)

SELECT @counter = 100000,
       @id1     = 1,
       @id2     = 1,
       @val1    = 'abc',
       @val2    = '123456'

BEGIN TRANSACTION

WHILE @counter > 0
    BEGIN
        INSERT mySTATUS_DATA (id1, id2, val1, val2)
                    VALUES (@id1, @id2, @val1, @val2)

        SELECT @counter = @counter - 1
        SELECT @id2 = @id2 + 1
        SELECT @id1 = @id1 + 1, @id2 = 1 WHERE Rand() > 0.8
        SELECT @val1 = SubString(convert(varchar(100), NewID()), 0, 9) WHERE Rand() > 0.90
        SELECT @val2 = SubString(convert(varchar(100), NewID()), 0, 9) WHERE Rand() > 0.90

        if @counter % 1000 = 0
            BEGIN
                COMMIT TRANSACTION
                BEGIN TRANSACTION
            END

    END

COMMIT TRANSACTION

SELECT top 1000 * FROM mySTATUS_DATA
SELECT COUNT(*) FROM mySTATUS_DATA

And here is the code that does the actual scrubbing. Note that the why column is there for educational purposes only; if you put this into production I suggest commenting it out, since it only slows the operation down. Also, you could combine the val1 and val2 checks into one UPDATE; in fact, with some effort you could probably combine everything into a single DELETE statement. However, I very much doubt that would make things much faster... and it would certainly make it less readable. Anyway, when I run this over 100K records on my laptop it takes just 5 seconds, so I doubt performance will be an issue.

IF OBJECT_ID('tempdb..#working') IS NOT NULL DROP TABLE #working
GO

-- create copy of table
SELECT id1, id2, id2_seqnr = ROW_NUMBER() OVER (PARTITION BY id1 ORDER BY id2),
       val1, val2,
       keep_this_record = Convert(bit, 0),
       why = Convert(varchar(500), NULL)
  INTO #working
  FROM STATUS_DATA
 WHERE 1 = 2

-- load records
INSERT #working (id1, id2, id2_seqnr, val1, val2, keep_this_record, why)
SELECT id1, id2, id2_seqnr = ROW_NUMBER() OVER (PARTITION BY id1 ORDER BY id2),
       val1, val2,
       keep_this_record = Convert(bit, 0),
       why = ''
  FROM STATUS_DATA

-- index
CREATE UNIQUE CLUSTERED INDEX uq0 ON #working (id1, id2_seqnr)

-- make sure we keep the first record of each id1
UPDATE upd
   SET keep_this_record = 1,
       why = upd.why + 'first id2 for id1 = ' + Convert(varchar, id1) + ','
  FROM #working upd
 WHERE id2_seqnr = 1 -- first in sequence

-- make sure we keep the last record of each id1
UPDATE upd
   SET keep_this_record = 1,
       why = upd.why + 'last id2 for id1 = ' + Convert(varchar, upd.id1) + ','
  FROM #working upd
  JOIN (SELECT id1, max_seqnr = MAX(id2_seqnr)
          FROM #working
         GROUP BY id1) mx
    ON upd.id1 = mx.id1
   AND upd.id2_seqnr = mx.max_seqnr

-- check if val1 has changed versus the previous record
UPDATE upd
   SET keep_this_record = 1,
       why = upd.why + 'val1 for ' + Convert(varchar, upd.id1) + '/' + Convert(varchar, upd.id2) + ' differs from val1 for ' + Convert(varchar, prev.id1) + '/' + Convert(varchar, prev.id2) + ','
  FROM #working upd
  JOIN #working prev
    ON prev.id1 = upd.id1
   AND prev.id2_seqnr = upd.id2_seqnr - 1
   AND prev.val1 <> upd.val1 

-- check if val2 has changed versus the previous record
UPDATE upd
   SET keep_this_record = 1,
       why = upd.why + 'val2 for ' + Convert(varchar, upd.id1) + '/' + Convert(varchar, upd.id2) + ' differs from val2 for ' + Convert(varchar, prev.id1) + '/' + Convert(varchar, prev.id2) + ','
  FROM #working upd
  JOIN #working prev
    ON prev.id1 = upd.id1
   AND prev.id2_seqnr = upd.id2_seqnr - 1
   AND prev.val2 <> upd.val2

-- delete those records we do not want to keep
DELETE del
  FROM STATUS_DATA del
  JOIN #working w
    ON w.id1 = del.id1
   AND w.id2 = del.id2
   AND w.keep_this_record = 0

-- some info
SELECT TOP 500 * FROM #working ORDER BY id1, id2
SELECT TOP 500 * FROM STATUS_DATA ORDER BY id1, id2
Answered 2014-04-15T14:41:45.777