
Suppose my update query looks like this:

UPDATE a SET
    a.colSomething = 1
FROM tableA a WITH (NOLOCK)
INNER JOIN tableB b WITH (NOLOCK)
        ON a.colA = b.colB
INNER JOIN tableC c WITH (NOLOCK)
        ON c.colC = a.colA

Suppose the joins to tableB and tableC above take several minutes to complete. In terms of table/row locking, is the entire table locked for the duration of the join? Or is the SQL engine smart enough to avoid locking the whole table?

Compared with the query above, is storing the join result in a temp table before performing the actual update less likely to deadlock, like this:

SELECT a.colA
INTO #tmp
FROM tableA a
INNER JOIN tableB b WITH (NOLOCK)
    ON a.colA = b.colB
INNER JOIN tableC c WITH (NOLOCK)
    ON c.colC = a.colA

UPDATE a SET a.colSomething = 1
FROM tableA a INNER JOIN #tmp t ON a.colA = t.colA

Thanks!


3 Answers


Blocking vs. deadlocking

I think you may be confusing locking and blocking with deadlocks.

On any update query, SQL Server will lock the involved data. While this lock is active, other processes are blocked (delayed) from editing the data. If the original update takes a long time (from a user's perspective, even a few seconds), the front end may seem to 'hang', or may even time out the user's process and report an error.

This is not a deadlock. Blocking resolves itself, essentially non-destructively, either by delaying the user slightly or, in some cases, by forcing the front end to handle the timeout gracefully. If the problem is blocking caused by long-running updates, you can spare users from having to resubmit by increasing the front-end timeout.

A deadlock, however, cannot be resolved no matter how much you increase the timeout. One of the processes will be terminated with prejudice (losing its update).

Deadlocks have different root causes than blocking. Deadlocks are usually caused by inconsistent sequential logic in the front end, which accesses and locks data from two tables in different orders in two different parts of the application. When these two parts run concurrently in a multi-user environment, they can non-deterministically cause deadlocks, and essentially unavoidable data loss (until the cause of the deadlocks is fixed), as opposed to blocking, which can usually be dealt with.

Managing blocking

Will SQL Server choose row locks or a whole-table lock?

Generally, it depends, and it can be different each time. Depending on how many rows the query optimizer estimates will be affected, the lock may be taken at the row level or the table level. If it's over a certain threshold, SQL Server will escalate to a table lock because that is faster.
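One way to see which locks a given update actually takes is to run it inside an open transaction and query the `sys.dm_tran_locks` DMV before committing. A sketch, using the table and column names from the question:

```sql
BEGIN TRANSACTION;

UPDATE a SET a.colSomething = 1
FROM tableA a
INNER JOIN tableB b ON a.colA = b.colB;

-- While the transaction is still open, inspect the locks it holds.
-- resource_type shows KEY/RID (row), PAGE, or OBJECT (the whole
-- table, i.e. after lock escalation); request_mode shows S/U/X.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

ROLLBACK TRANSACTION;
```

Run this against a realistic data volume; with only a handful of affected rows you will see row locks that may escalate to a table lock in production.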

How can I reduce blocking while adhering to the basic tenets of transactional integrity?

SQL Server is going to attempt to lock the tables you are joining to, because their contents are material to generating the result set that gets updated. You can view an estimated execution plan for the update to see what will be locked given today's table sizes. If the predicted lock is a table lock, you can perhaps override it with a row-lock hint, but this does not guarantee freedom from blocking. It may reduce the chance of inadvertently blocking possibly unrelated data in the table, but you will essentially always get blocking on data directly material to the update.
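The row-lock hint mentioned above can be applied like this (a sketch using the question's names; note that ROWLOCK is a request, not a guarantee, and SQL Server may still escalate the lock):

```sql
UPDATE a SET a.colSomething = 1
FROM tableA a WITH (ROWLOCK)   -- ask for row-level locks on the updated table
INNER JOIN tableB b ON a.colA = b.colB
INNER JOIN tableC c ON c.colC = a.colA;
```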

Keep in mind, however, that the locks taken on the joined tables will be shared locks. Other processes can still read from those tables; they just can't update them until YOUR update is done using them as a reference. In contrast, other processes will block when trying simply to READ data that your update holds an exclusive lock on (the main table being updated).

So the joined tables can still be read. The data being updated will be exclusively locked as a group of records until the update completes, or fails and is rolled back as a group.

Answered 2013-07-18T23:09:01.933

I would put indexes on your foreign keys. They can speed up update and delete operations, and may alleviate your deadlock situation.
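Assuming the join columns from the question are the foreign keys involved, that suggestion looks something like this (column names come from the question; the index names are made up for illustration):

```sql
-- Index the columns used in the join/foreign-key relationships,
-- so the update can seek rather than scan the joined tables.
CREATE NONCLUSTERED INDEX IX_tableB_colB ON tableB (colB);
CREATE NONCLUSTERED INDEX IX_tableC_colC ON tableC (colC);
```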

Answered 2013-07-18T21:30:05.877

I ran into exactly the same problem trying to update a table with 800K records, joined to another table on 10 join conditions. The update took over 30 minutes.

I got it down to 8 seconds by creating a temp table containing only the rows that needed updating. I then updated the first table with those results; only 20,000 rows actually needed to be updated. By default, a SELECT is not logged, and I believe (but am not certain) that a temp table created with SELECT INTO is not logged either (can someone please confirm?).

When you issue an update against a large table joined to another large table, every updated field is written to the transaction log, then the next available candidate is searched for, then the change is logged again, and so on. If you can accept dirty reads, creating a temp table containing only the records that will actually be updated will greatly reduce the update time, and with it the chance of deadlocks.
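The pattern described above, sketched with the question's table names (the NOLOCK hints accept dirty reads, as noted; the WHERE clause excluding already-updated rows is an assumption added for illustration):

```sql
-- 1. Collect only the keys that actually need updating.
SELECT a.colA
INTO #toUpdate
FROM tableA a WITH (NOLOCK)
INNER JOIN tableB b WITH (NOLOCK) ON a.colA = b.colB
INNER JOIN tableC c WITH (NOLOCK) ON c.colC = a.colA
WHERE a.colSomething <> 1;   -- assumption: skip rows already in the target state

-- 2. Update against the small temp table; exclusive locks are held
--    only for the duration of this much shorter statement.
UPDATE a SET a.colSomething = 1
FROM tableA a
INNER JOIN #toUpdate t ON a.colA = t.colA;

DROP TABLE #toUpdate;
```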

It is also important to remove any functions from the WHERE clause, such as ISNULL or even computed string comparisons. These can dramatically increase your update time.
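For example, a predicate that wraps a column in ISNULL can often be rewritten so an index on that column can be used (hypothetical predicate values for illustration):

```sql
-- Slow: wrapping the column in ISNULL() prevents an index seek.
UPDATE a SET a.colSomething = 1
FROM tableA a
WHERE ISNULL(a.colA, 0) = 5;

-- Faster: same logic without a function on the column.
-- A NULL colA never equals 5, so the ISNULL(..., 0) wrapper was redundant here.
UPDATE a SET a.colSomething = 1
FROM tableA a
WHERE a.colA = 5;
```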

Answered 2018-07-19T20:23:36.357