The usual advice for avoiding deadlocks is to always lock resources in the same order. But how would you implement this with regard to row locks in a highly contended Oracle database?
To see what I mean, consider the following example. A very simple DAO to handle bank accounts:
@Component
public class AccountDao {

    @Resource
    private DataSource dataSource;

    public void withdraw(String account, int amount) {
        modifyBalance(account, -amount);
    }

    public void deposit(String account, int amount) {
        modifyBalance(account, amount);
    }

    private void modifyBalance(String account, int amount) {
        try {
            Connection connection = DataSourceUtils.getConnection(dataSource);
            PreparedStatement statement = connection
                    .prepareStatement("update account set balance = balance + ? where holder = ?");
            statement.setInt(1, amount);
            statement.setString(2, account);
            statement.execute();
        }
        catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}
To execute a transfer between two accounts, there is some sort of InternalBankTransfer
class that has a transfer method:
public void transfer(String from, String to, int amount) {
    // start transaction
    accountDao.withdraw(from, amount);
    accountDao.deposit(to, amount);
    // commit transaction
}
Normally this works fine. But suppose two people initiate transfers at the same time: Anne wants to transfer 100 bucks to Bob at the same moment that Bob wants to transfer 50 to Anne. So in one thread Anne calls transfer("Anne", "Bob", 100), and in another Bob calls transfer("Bob", "Anne", 50). This code is susceptible to deadlocks if the execution order is as follows:
T1: accountDao.withdraw("Anne", 100);
T2: accountDao.withdraw("Bob", 50);
T1: accountDao.deposit("Bob", 100);
T2: accountDao.deposit("Anne", 50); // BAM! ORA-00060: deadlock detected while waiting for resource
I admit that I hadn't considered this at all before I started seeing deadlocks in a real application. My naive view was that transaction isolation sort of took care of this automatically. Oracle says this is due to poor application design. But what is a good design in this case? Do I need to select for update everything I plan to update? What if it's a huge transaction involving updates to several tables? Should I design so that deadlocks are impossible, or just minimize them and accept that they are a fact of life?
I think it's a fact of life (and one that should really only happen with high concurrency and hotspot data).
If you wanted to implement lock ordering, then yes, you'd need to rewrite your code to lock or update the accounts in pre-determined order (first Anne, then Bob). But that won't be feasible with complex transactions. If it only happens with a few hotspot rows, maybe you can use lock ordering for just those (and leave the rest as is) and get by with that.
Or use less granular locks, but that will kill your concurrency.
In your case, you can just retry the aborted transaction. And if it happens too often, it does seem like you have some problem with your application design.
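A minimal sketch of that retry, assuming the whole transfer runs in one transaction and is simply rerun when Oracle picks it as the deadlock victim. The TransferOperation interface and the attempt limit are illustrative, not from the code above; ORA-00060 surfaces through the Oracle JDBC driver as vendor error code 60:

```java
import java.sql.SQLException;

public class DeadlockRetry {

    // ORA-00060 is reported by the Oracle driver as vendor error code 60
    private static final int ORACLE_DEADLOCK_CODE = 60;
    private static final int MAX_ATTEMPTS = 3;

    @FunctionalInterface
    public interface TransferOperation {
        void run() throws SQLException;
    }

    // Reruns the whole transaction when it aborts as a deadlock victim.
    public static void runWithRetry(TransferOperation op) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                op.run();
                return;
            } catch (SQLException e) {
                if (e.getErrorCode() != ORACLE_DEADLOCK_CODE || attempt >= MAX_ATTEMPTS) {
                    throw e; // not a deadlock, or out of attempts
                }
                // Oracle rolled back the victim's statement; loop and rerun
            }
        }
    }
}
```

The operation passed in must be the complete transaction (including the rollback of anything partially done), since the retry starts it from scratch.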
Here is a link for a two-phase commit protocol for bank account transfers. It is from the MongoDB wiki, i.e. from people who do not even have the luxury of row locks and transactions in the first place, but one could implement that on an RDBMS as well in order to avoid lock contention. That would of course be a rather radical application redesign. I'd try everything else first (retries, coarse locks, artificially reduced concurrency level, batch processing).
There are a few issues with the design above.
Even though you asked about deadlocks, I feel the need to also point out other things that are wrong IMHO; they might save you from some trouble in the future.
The first problem I see in your design is the method separation: to modify a balance, you have a method to withdraw and a method to deposit, and each calls the same method "modifyBalance" to do the actual work. There are a few problems with the way this is done:
1. The modifyBalance method requests a connection each time it is called.
2. The connection will most probably have auto-commit mode on, since you never set auto-commit to off.
Why is this problematic? The logic you are doing is supposed to be a single unit. Suppose you withdraw 50 from Bob and it succeeds: with auto-commit on, the change is final. Now you try to make the deposit to Anne and it fails. According to the code above, Anne will not get the 50, but Bob has already lost it! In that case you would need to call deposit for Bob again to return the 50, hoping that won't fail as well... or else, infinite processing. Therefore these actions ought to be in the same transaction: either both the withdrawal and the deposit succeed and get committed, or they fail and everything is rolled back.
Auto-commit is also problematic because the commit happens when the statement completes or when the next execute happens. If for any reason the commit does not happen, then since you are not closing the connection (which is yet another problem, as it never goes back to the pool), the row stays locked, and another update issued on the row locked by the first transaction might lead to a deadlock.
Therefore, I suggest you do one of the following: either request the connection in your transfer method, or merge the withdraw and deposit methods into modifyBalance itself.
Since it seems to me that you like having the two methods the way they are, I will demo the first option I mentioned :)
public class AccountDao {

    @Resource
    private DataSource dataSource;

    public void withdraw(String account, int amount, Connection connection) throws SQLException {
        modifyBalance(account, -amount, connection);
    }

    public void deposit(String account, int amount, Connection connection) throws SQLException {
        modifyBalance(account, amount, connection);
    }

    private void modifyBalance(String account, int amount, Connection connection) throws SQLException {
        PreparedStatement statement = connection
                .prepareStatement("update account set balance = balance + ? where holder = ?");
        statement.setInt(1, amount);
        statement.setString(2, account);
        statement.execute();
    }
}
and the transfer method becomes:
public void transfer(String from, String to, int amount) {
    Connection connection = null;
    try {
        connection = DataSourceUtils.getConnection(dataSource);
        connection.setAutoCommit(false);
        accountDao.withdraw(from, amount, connection);
        accountDao.deposit(to, amount, connection);
        connection.commit();
    }
    catch (SQLException e) {
        if (connection != null) {
            try {
                connection.rollback();
            }
            catch (SQLException rollbackFailure) {
                // nothing more we can do; the connection is closed below
            }
        }
        throw new RuntimeException(e);
    }
    finally {
        if (connection != null) {
            try {
                connection.close();
            }
            catch (SQLException ignored) {
            }
        }
    }
}
Now either both actions succeed or both get rolled back. Also, when an update is issued on a row, other transactions trying to update that row will wait for it to finish before they can continue; rolling back or committing ensures the release of the row-level lock.
The above is a better design for keeping the logical actions together and the data correct, but it won't solve your lock issue! Here is an example of what might happen:
Suppose thread 1 is trying to withdraw from Bob.
Status: row Bob locked by thread 1.
At this time, thread 2 withdraws from Anne.
Status: row Anne locked by thread 2.
Now thread 1 wants to deposit to Anne.
Status: thread 1 sees that row Anne is locked, so it sits and waits; it is actually waiting for thread 2 to finish its update and commit or roll back so the lock gets released.
Now thread 2 wants to deposit to Bob.
Status: row Bob is locked, so thread 2 waits for its release.
DEADLOCK!
The two threads are waiting on each other.
So how do we solve it? Please see the other answers posted (I saw them while typing this), and please don't accept this answer; accept the one you actually use to prevent the deadlocks. I just wanted to point out the other issues, and sorry for being that long.
You could use SELECT FOR UPDATE NOWAIT on the row before trying to update it. If the row is already locked you will get an error (ORA-00054). Either wait a bit and retry(*), or throw an exception.
You should never run into deadlocks since they are so easily prevented.
(*) In this case you would have to retry the whole transaction (after a rollback) to prevent a deadlock situation.
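As a sketch of that approach, assuming an account table keyed by holder as in the question (the helper names here are made up; ORA-00054 surfaces through the Oracle JDBC driver as vendor error code 54):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class RowLockGuard {

    // ORA-00054 ("resource busy ... NOWAIT specified") is vendor error code 54
    private static final int ORACLE_RESOURCE_BUSY = 54;

    public static boolean isRowBusy(SQLException e) {
        return e.getErrorCode() == ORACLE_RESOURCE_BUSY;
    }

    // Tries to take the row lock up front; returns false if someone else holds it,
    // so the caller can roll back and retry instead of queuing up into a deadlock.
    public static boolean tryLockAccount(Connection connection, String holder) throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(
                "select balance from account where holder = ? for update nowait")) {
            stmt.setString(1, holder);
            stmt.executeQuery();
            return true; // lock acquired; safe to issue the update now
        } catch (SQLException e) {
            if (isRowBusy(e)) {
                return false; // row locked elsewhere: back off, roll back, retry
            }
            throw e;
        }
    }
}
```

The transfer would call tryLockAccount for both rows before updating; if either call returns false, roll back and retry the whole transaction.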
Assuming the withdrawal and the deposit are part of a single database transaction, it should be relatively easy to avoid deadlocks simply by working with the accounts in order. If your application implemented transfers by debiting or crediting the lower account number first, then debiting or crediting the higher account number, you would never be able to deadlock by issuing multiple concurrent transfers. It doesn't matter from a deadlock-prevention standpoint what ordering you enforce (though it may matter for application performance), so long as you are consistent about enforcing that ordering.
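A minimal sketch of that ordering rule, using lexicographic order on the holder name since the example keys accounts by holder rather than by account number (any consistent total order works):

```java
import java.util.Arrays;

public class OrderedTransfer {

    // Returns the two holders in canonical locking order. Every transaction
    // that updates both rows must touch them in this order, so no two
    // concurrent transfers can each hold one row while waiting for the other.
    public static String[] lockOrder(String a, String b) {
        String[] order = {a, b};
        Arrays.sort(order);
        return order;
    }
}
```

In transfer you would then update lockOrder(from, to)[0] first and lockOrder(from, to)[1] second, choosing the sign (debit or credit) per account rather than per call position.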