The service operation Process(int id) retrieves a particular row (based on the id parameter) from a single DB table and, just before it returns, saves that state back to the table. If two calls to Process() are made at the same time (the first within transaction T1, the second within transaction T2), the service will try to execute the two calls concurrently. Since both T1 and T2 access the same DB table, I expected one of the two transactions to be granted access to the resource while the other got blocked until the first transaction committed or aborted. Instead I get an exception:

Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim

I don't understand the reasoning behind throwing a deadlock exception, since as far as I can tell there is no danger of a deadlock. For one thing, the two transactions access and operate on different rows. Why wasn't the DB resource simply locked until the original transaction committed or aborted?
Here is the code:
[ServiceContract]
public interface IService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void Process(int id);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, IncludeExceptionDetailInFaults = true)]
public class Service : IService
{
    string state_Data = "";

    [OperationBehavior(TransactionScopeRequired = true)]
    public void Process(int id)
    {
        GetState(id);
        Thread.Sleep(6000);
        SaveState(id);
    }

    private void GetState(int id)
    {
        using (SqlConnection con = new SqlConnection())
        {
            con.ConnectionString = "data source=localhost; initial catalog=WCF; integrated security=sspi;";
            SqlCommand cmd = new SqlCommand();
            cmd.CommandText = "SELECT * FROM StateTable WHERE id = @id";
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
            cmd.Connection = con;
            con.Open();
            SqlDataReader reader = cmd.ExecuteReader();
            if (reader.Read())
                state_Data = reader["State"].ToString();
        }
    }

    private bool SaveState(int id)
    {
        using (SqlConnection con = new SqlConnection())
        {
            con.ConnectionString = "data source=localhost; initial catalog=WCF; integrated security=sspi;";
            SqlCommand cmd = new SqlCommand();
            cmd.CommandText = "UPDATE StateTable SET State = @State WHERE Id = @id";
            cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
            cmd.Parameters.Add("@State", SqlDbType.NVarChar).Value = state_Data;
            cmd.Connection = con;
            con.Open();
            int ret = cmd.ExecuteNonQuery();
            return ret == 1;
        }
    }
}
EDIT:
In case it helps, here is the client code:
FIRST CLIENT:

ServiceClient proxy = new ServiceClient("WSDualHttpBinding_IService");
using (TransactionScope scope = new TransactionScope())
{
    proxy.Process(1);
    scope.Complete();
}

SECOND CLIENT:

ServiceClient proxy = new ServiceClient("WSDualHttpBinding_IService");
using (TransactionScope scope = new TransactionScope())
{
    proxy.Process(2);
    scope.Complete();
}
Thank you
Actually, this code is a guaranteed deadlock. Any number of GetState calls on the same ID can succeed, since they all acquire (and retain, due to the serializable transaction scope) shared locks, and shared locks are compatible with each other. Any subsequent attempt to SaveState will block, because that multitude of shared locks is incompatible with the X lock needed for the update. The next SaveState will then deadlock. 100% repro, guaranteed, every time.
You should use optimistic concurrency instead if you care about performance. If performance is irrelevant, then GetState should lock the state exclusively, e.g. by providing an XLOCK hint.
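For the pessimistic route, here is a rough sketch of what that could look like, assuming the same StateTable schema and connection string as in the question; the only real change is the locking hint on the SELECT:

// Sketch only: GetState taking an exclusive row lock up front, so a second
// caller blocks on the SELECT instead of deadlocking later on the UPDATE.
private void GetState(int id)
{
    using (SqlConnection con = new SqlConnection("data source=localhost; initial catalog=WCF; integrated security=sspi;"))
    using (SqlCommand cmd = con.CreateCommand())
    {
        // XLOCK + ROWLOCK asks SQL Server to take (and hold for the duration
        // of the transaction) an exclusive lock on just this row.
        cmd.CommandText = "SELECT State FROM StateTable WITH (XLOCK, ROWLOCK) WHERE Id = @id";
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
        con.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
                state_Data = reader["State"].ToString();
        }
    }
}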
Of course, I assume there is a clustered index on ID in StateTable.
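And a minimal sketch of the optimistic alternative. It assumes a rowversion column on StateTable (called RowVer here, which is not part of the original schema): the UPDATE only succeeds if the row still carries the version that was read, and zero affected rows means someone else changed the row in between.

// Sketch only: optimistic concurrency via a hypothetical RowVer (rowversion) column.
private byte[] rowVer;

private void GetState(int id)
{
    using (SqlConnection con = new SqlConnection("data source=localhost; initial catalog=WCF; integrated security=sspi;"))
    using (SqlCommand cmd = con.CreateCommand())
    {
        cmd.CommandText = "SELECT State, RowVer FROM StateTable WHERE Id = @id";
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
        con.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                state_Data = reader["State"].ToString();
                rowVer = (byte[])reader["RowVer"];   // remember the version we read
            }
        }
    }
}

private bool SaveState(int id)
{
    using (SqlConnection con = new SqlConnection("data source=localhost; initial catalog=WCF; integrated security=sspi;"))
    using (SqlCommand cmd = con.CreateCommand())
    {
        // Only update the row if it still has the version we originally read.
        cmd.CommandText = "UPDATE StateTable SET State = @State WHERE Id = @id AND RowVer = @RowVer";
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = id;
        cmd.Parameters.Add("@State", SqlDbType.NVarChar).Value = state_Data;
        cmd.Parameters.Add("@RowVer", SqlDbType.Timestamp).Value = rowVer;
        con.Open();
        // 0 rows affected means the row changed after we read it;
        // the caller decides whether to retry or report a conflict.
        return cmd.ExecuteNonQuery() == 1;
    }
}

Note that this only pays off if the read also runs at a lower isolation level than Serializable (the TransactionScope default); otherwise the retained shared locks described above come right back.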