The event logs for my .NET application show that it occasionally deadlocks while reading from SQL Server. This is very rare, as we have already optimized our queries to avoid deadlocks, but they sometimes still occur. In the past, we've had deadlocks that occur while calling the ExecuteReader function on our SqlCommand instance. To fix this, we've added retry code that simply runs the query again, like this:
//try to run the query five times if a deadlock happens
int DeadLockRetry = 5;
while (DeadLockRetry > 0)
{
    try
    {
        return dbCommand.ExecuteReader();
    }
    catch (SqlException err)
    {
        //rethrow if the error is not a deadlock, or if the deadlock persists after 5 tries
        if (err.Number != 1205 || --DeadLockRetry == 0)
            throw;
    }
}
This worked very well for the case where the deadlock happened during the initial query execution, but now we are getting deadlocks while iterating through the results using the Read() function on the returned SqlDataReader.
Again, I'm not concerned with optimizing the query, merely with recovering in the rare case that a deadlock occurs. I was thinking about using a similar retry process: I could create my own class that inherits from SqlDataReader and simply overrides the Read function with retry code, like this:
public class MyDataReader : SqlDataReader
{
    public override bool Read()
    {
        int DeadLockRetry = 5;
        while (DeadLockRetry > 0)
        {
            try
            {
                return base.Read();
            }
            catch (SqlException ex)
            {
                //SqlException.Number (not ErrorCode) carries the SQL Server error number; 1205 is a deadlock
                if (ex.Number != 1205 || --DeadLockRetry == 0)
                    throw;
            }
        }
        return false;
    }
}
Is this the right approach? I want to be sure that records aren't skipped in the reader. Will retrying Read after a deadlock skip any rows? Also, should I call Thread.Sleep between retries to give the database time to get out of the deadlock state, or is retrying immediately enough? This case is not easy to reproduce, so I'd like to be sure about it before I modify any of my code.
EDIT:
As requested, some more info about my situation: in one case, I have a process that runs a query to load a list of record ids that need to be updated. I then iterate through that list of ids using the Read function and run an update process on each record, which eventually updates a value for that record in the database. (No, there is no way to perform the update in the initial query; many other things happen for each record that is returned.) This code has been working fine for a while, but we run quite a bit of code for each record, so I can imagine one of those processes is taking a lock on the table being read through.
After some thought, Scottie's suggestion to use a data structure to store the results would likely fix this situation. I could store the ids returned in a List<int> and loop through that. That way the locks on the rows can be released right away.
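Something like this, where UpdateRecord is just a placeholder for the per-record processing:

//materialize the ids first so the reader, and any locks it holds,
//are released before the per-record update work begins
List<int> ids = new List<int>();
using (SqlDataReader reader = dbCommand.ExecuteReader())
{
    while (reader.Read())
        ids.Add(reader.GetInt32(0));
}

foreach (int id in ids)
{
    UpdateRecord(id); //hypothetical per-record update process
}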
However, I would still be interested in knowing if there is a general way to recover from deadlocks on a read.
Your entire transaction is lost in a deadlock. You have to restart from scratch, from well above the level where you read from the data reader. You say that you read some records, then update them in a loop; that means you have to restart with reading the records again:
void DoWork()
{
    using (TransactionScope scope = new TransactionScope(...))
    {
        SqlCommand cmd = new SqlCommand("select ...");
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            while (rdr.Read())
            {
                // ... process each record
            }
        }
        scope.Complete();
    }
}
You have to retry the entire DoWork call:
int retries = 0;
bool success = false;
do
{
    try
    {
        DoWork();
        success = true;
    }
    catch (SqlException e)
    {
        //retry only on deadlock (error 1205), and only a limited number of times
        if (e.Number == 1205 && ++retries < 5)
        {
            //possibly wait a little here before the next attempt
        }
        else
        {
            throw;
        }
    }
} while (!success);
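On the Thread.Sleep question: yes, a short pause between attempts is generally worthwhile, since an immediate retry can fire while the competing transaction is still running. A sketch of the same retry wrapped in a helper with a simple linear backoff (RunWithDeadlockRetry and the 200 ms base delay are illustrative choices, not anything from your code):

//retries the given action on deadlock (error 1205), waiting a
//little longer before each new attempt
static void RunWithDeadlockRetry(Action work, int maxRetries)
{
    int attempt = 0;
    while (true)
    {
        try
        {
            work();
            return;
        }
        catch (SqlException e)
        {
            if (e.Number != 1205 || ++attempt == maxRetries)
                throw;
            Thread.Sleep(200 * attempt); //linear backoff between tries
        }
    }
}

Calling it is then just RunWithDeadlockRetry(DoWork, 5);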
Could you possibly consider using DataSets or DataTables instead of DataReaders?
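For example, filling a DataTable reads the whole result set up front and releases the connection immediately, so no read locks linger while you process the rows (connectionString and the query here are placeholders):

//SqlDataAdapter.Fill opens the connection, reads every row into the
//table, and closes the connection again, so nothing stays locked
//while the rows are processed afterwards
DataTable table = new DataTable();
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlDataAdapter adapter = new SqlDataAdapter("select ...", conn))
{
    adapter.Fill(table);
}

foreach (DataRow row in table.Rows)
{
    //process each record
}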
My fear here, although I'm not positive, is that a Read that throws will discard that record altogether. You might want to test this in a controlled environment where you can force an error, and see whether it re-reads the record or simply discards it.