
Locking strategies and techniques for preventing deadlocks in code

https://www.devze.com 2023-03-06 15:08 Source: web
The common solution to preventing deadlock in code is to make sure locks are always acquired in a consistent order, regardless of which thread is accessing the resources.


For example, given threads T1 and T2, where T1 accesses resource A and then B, and T2 accesses resource B and then A: locking the resources in the order each thread happens to need them can cause a deadlock. The simple solution is to always lock A and then lock B, regardless of the order in which a specific thread will use the resources.

Problematic situation:

Thread1                         Thread2
-------                         -------
Lock Resource A                 Lock Resource B
 Do Resource A thing...          Do Resource B thing...
Lock Resource B                 Lock Resource A
 Do Resource B thing...          Do Resource A thing...

Possible Solution:

Thread1                         Thread2
-------                         -------
Lock Resource A                 Lock Resource A
Lock Resource B                 Lock Resource B
 Do Resource A thing...          Do Resource B thing...
 Do Resource B thing...          Do Resource A thing...

My question is: what other techniques, patterns, or common practices are used in coding to guarantee deadlock prevention?


The technique you describe isn't just common: it's the one technique that has been proven to work all the time. There are a few other rules you should follow when coding threaded code in C++, though, among which the most important may be:

  • don't hold a lock when calling a virtual function: even if, at the time you write your code, you know which function will be called and what it will do, code evolves, and virtual functions are there to be overridden. Ultimately you won't know what the override does or whether it takes other locks, which means you lose your guaranteed locking order.
  • watch out for race conditions: in C++, nothing will tell you when a given piece of data is shared between threads without any synchronization on it. An example of this was posted in the C++ Lounge on SO chat a few days ago by Luc (code at the end of this post): merely synchronizing on something else that happens to be in the neighborhood doesn't mean your code is correctly synchronized.
  • try to hide asynchronous behavior: you're usually better off hiding concurrency inside your software's architecture, so that most calling code won't care whether there's a thread there or not. That makes the architecture easier to work with, especially for someone who isn't used to concurrency.

I could go on for a while, but in my experience, the easiest way to work with threads is using patterns that are well-known to everyone who might work with the code, such as the producer/consumer pattern: it's easy to explain and you only need one tool (a queue) to allow your threads to communicate with each other. After all, the only reason for two threads to be synchronized with each other, is to allow them to communicate.

More general advice:

  • Don't try your hand at lock-free programming until you've had experience with concurrent programming using locks - it's an easy way to blow your foot off, or run into very strange bugs.
  • Reduce the number of shared variables and the number of times those variables are accessed to a bare minimum.
  • Don't count on two events always occurring in the same order, even if you can't see any way of them reversing order.
  • More generally: don't count on timing - don't think a given task should always take a given amount of time.

The following code will fail:

#include <thread>
#include <cassert>
#include <chrono>
#include <iostream>
#include <mutex>
#include <condition_variable>
 
void
nothing_could_possibly_go_wrong()
{
    int flag = 0;
 
    std::condition_variable cond;
    std::mutex mutex;
    int done = 0;
    typedef std::unique_lock<std::mutex> lock;
 
    auto const f = [&]
    {
        if(flag == 0) ++flag;
        lock l(mutex);
        ++done;
        cond.notify_one();
    };
    std::thread threads[2] = {
        std::thread(f),
        std::thread(f)
    };
    threads[0].join();
    threads[1].join();
 
    lock l(mutex);
    cond.wait(l, [done] { return done == 2; });
 
    // surely this can't fail!
    assert( flag == 1 );
}
 
int
main()
{
    for(;;) nothing_could_possibly_go_wrong();
}


Consistent ordering of locking is pretty much the first and last word when it comes to deadlock avoidance.

There are related techniques, such as lockless programming (where no thread ever waits on a lock, and thus there is no possibility of a cycle), but that's really just a special case of the "avoid inconsistent locking order" rule -- i.e. they avoid inconsistent locking by avoiding all locking. Unfortunately, lockless programming has its own issues, so it's not a panacea either.

If you want to broaden the scope a bit, there are methods for detecting deadlocks when they do occur (if for some reason you can't design your program to avoid them), and ways for breaking deadlocks when they do occur (e.g. by always locking with a timeout, or by forcing one of the deadlocked threads to have their Lock() command fail, or even just by killing one of the deadlocked threads); but I think they are all pretty inferior to simply making sure deadlocks cannot happen in the first place.

(By the way, if you want an automated way to check whether your program has potential deadlocks in it, check out Valgrind's Helgrind tool. It will monitor your code's locking patterns and notify you of any inconsistencies -- very useful.)


Another technique is transactional programming. This is not very common, though, as it usually involves specialized hardware (much of it currently found only in research institutions).

Each resource keeps track of modifications from different threads. The first thread to commit its changes to all the resources it is using wins; every other thread using those resources is rolled back to try again against the newly committed state.

A simplistic starting point for reading on the subject is transactional memory.


While not an alternative to the known-sequence solution you mention, Andrei Alexandrescu wrote about some techniques for compile time checks that acquisition of locks is done through the intended mechanisms. See http://www.informit.com/articles/article.aspx?p=25298


You are asking about the design level, but I'll add some lower level, programming practices.

  • Classify each function (method) as blocking, non-blocking or having unknown blocking behaviour.
  • A blocking function is a function that acquires a lock, or calls a slow system call (which in practice means it does I/O), or calls a blocking function.
  • Whether a function is guaranteed to be non-blocking is part of the specification of that function, just like its preconditions and its degree of exception safety. It must therefore be documented as such. In Java I use an annotation; in C++ code documented with Doxygen I'd use a formulaic phrase in the function's header comment.
  • Consider calling a function that is not specified to be non-blocking while holding a lock to be dangerous.
  • Refactor such dangerous code to eliminate the danger or to concentrate the danger into a small section of code (perhaps within its own function).
  • For the remaining dangerous code, provide an informal proof that the code is not actually dangerous in the commentary of the code.
