I'm trying to use ROWLOCK to prevent certain rows from being updated while this runs, but the problem is that I can't use the tables at all while it's running:
set transaction isolation level repeatable read;
go
begin try
begin transaction;
insert into tableB with(rowlock)
select * from tableA with(rowlock) where status = 1;
commit transaction;
end try
begin catch
if xact_state() <> 0
begin
rollback transaction;
end
end catch;
I ran an insert against both tables while this transaction was running, and both inserts waited for it to commit. What should I do to lock certain rows for read and delete but keep the tables usable?
For the record, the select returns tens of thousands of records. The tables have clustered primary keys, and the status column has an index.
I wouldn't recommend attempting to individually lock certain rows; I think you're going to run into more problems than you're trying to solve. The reason you're hitting the locking problem you described is the number of rows being inserted, relative to the number of rows in your table. Once a single statement holds enough row or page locks (around 5,000), SQL Server escalates them to a table lock at its own discretion, and the ROWLOCK hint does not prevent that.
I think you will be much better off inserting rows in much smaller transaction batches to reduce the amount of locking going on. In other words, try inserting 100 rows at a time, each batch in its own transaction, and see if you get locking behavior you find acceptable.
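A minimal sketch of that batching approach, assuming both tables share a clustered primary key column named id (adjust the names to your schema; the NOT EXISTS check is a hypothetical way to skip rows already copied):

```sql
declare @batch int = 100;   -- rows per transaction; tune as needed
declare @rows  int = 1;

while @rows > 0
begin
    begin transaction;

    -- copy a small slice so each transaction holds few locks
    -- and never triggers lock escalation to a table lock
    insert into tableB
    select top (@batch) a.*
    from tableA a
    where a.status = 1
      and not exists (select 1 from tableB b where b.id = a.id);

    set @rows = @@rowcount;  -- capture before any other statement resets it

    commit transaction;
end
```

Because each transaction commits quickly, concurrent inserts on either table only ever wait for one small batch rather than the whole copy.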