I have a scenario to discuss here for Peterson's algorithm:
flag[0] = 0;
flag[1] = 0;
int turn;   // shared tie-breaker variable

P0: flag[0] = 1;
    turn = 1;
    while (flag[1] == 1 && turn == 1)
    {
        // busy wait
    }
    // critical section
    ...
    // end of critical section
    flag[0] = 0;

P1: flag[1] = 1;
    turn = 0;
    while (flag[0] == 1 && turn == 0)
    {
        // busy wait
    }
    // critical section
    ...
    // end of critical section
    flag[1] = 0;
Suppose both processes start executing concurrently. P0 sets flag[0] = 1 and dies. Then P1 starts. Its while condition is satisfied, since flag[0] == 1 (set by P0 before dying) and turn == 0 (set by P1 itself), so P1 is stuck in this loop forever, which is a deadlock.
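To make the scenario concrete, here is a minimal demo of it with two POSIX threads (the names p0_dies and p1 are mine, purely illustrative): thread 0 sets its flag and then "dies" (returns) without ever setting turn or clearing the flag, after which thread 1 spins forever in the entry protocol. volatile is used here only to keep the compiler from optimizing the loop away; it is not a substitute for real atomics.

#include <pthread.h>
#include <stdio.h>

volatile int flag[2] = {0, 0};
volatile int turn = 0;

void *p0_dies(void *arg)
{
    flag[0] = 1;   /* announce interest...            */
    return NULL;   /* ...then die before setting turn */
}

void *p1(void *arg)
{
    flag[1] = 1;
    turn = 0;
    while (flag[0] == 1 && turn == 0)
        ;          /* stuck forever: flag[0] is never cleared */
    puts("P1 entered the critical section");   /* never reached */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0_dies, NULL);
    pthread_join(t0, NULL);    /* make sure P0 has "died" first */
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);    /* hangs, as described above */
    return 0;
}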
So does Peterson's algorithm not account for this case? If the death of a process is not to be considered when analyzing such algorithms, then note that in William Stallings' Operating Systems book, Appendix A contains a series of algorithms for concurrency, starting with four incorrect algorithms for demonstration. It proves them incorrect by considering, among other cases, the case of a process dying before completion, but then claims Peterson's algorithm to be correct. I came across this thread, which gave me the clue that there is a problem with N processes (N > 2), but that for two processes the algorithm works fine. So is there a problem in the analysis of the algorithm as discussed above (suggested by one of my classmates, whom I fully second), or does Peterson's algorithm not claim freedom from deadlock even for two processes?

In the paper Some myths about famous mutual exclusion algorithms, the author concludes that Peterson never claimed that his algorithm assures bounded bypass.
Can unbounded bypass be thought of as reaching infinity, as in the case of deadlock? In that case we would have less trouble accepting that Peterson's algorithm can lead to deadlock.

Certainly you could write code that would throw an unhandled exception, but the algorithm supposes that the executing process will always set its flag to false after its critical section has executed. Therefore Peterson's algorithm does pass the three tests for critical sections:
1) Mutual exclusion - flag[0] and flag[1] can both be true, but turn can only be 0 or 1. Therefore only one of the two critical sections can execute at a time; the other process will spin-wait.
2) Progress - If process 0 is in the critical section, then turn == 0 and flag[0] is true. Once it has completed its critical section (even if something catastrophic happens inside it), it sets flag[0] to false. If process 1 was spin-waiting, it is now free to proceed, as flag[0] is no longer true.
3) Bounded waiting - If process 1 is waiting, process 0 can enter its critical section at most once before process 1 is given the green light, as explained in #2.
The problem with Peterson's algorithm is that on modern architectures, CPU caching and memory reordering can break the mutual exclusion requirement. The issue is one of memory consistency (often loosely called cache coherence): it is possible that Process 0 on CPU 0 sets flag[0] to true while Process 1 on CPU 1 still reads flag[0] as false. In that case both processes would enter their critical sections, and BANG... mutual exclusion fails, and race conditions are now possible.
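On such hardware the usual fix is to make every access to flag and turn atomic with sequentially consistent ordering, which rules out exactly the visibility and reordering problem described above. A minimal sketch in C11 (the function names peterson_lock/peterson_unlock are illustrative, not from any particular source):

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   /* shared, zero-initialized */
atomic_int turn;       /* shared */

void peterson_lock(int me)   /* me is 0 or 1 */
{
    int other = 1 - me;
    atomic_store(&flag[me], true);   /* announce interest          */
    atomic_store(&turn, other);      /* give way to the other side */
    /* seq_cst (the default ordering) ensures our store to flag[me]
       is visible before we load flag[other] */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                            /* busy wait */
}

void peterson_unlock(int me)
{
    atomic_store(&flag[me], false);
}

Note that weaker orderings (relaxed, or even acquire/release) are not enough here; the store-then-load pattern needs sequential consistency or explicit fences.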
You are right, Peterson's algorithm assumes processes do not fail while executing the part of the algorithm for synchronizing. I do not have access to the book you mention, but perhaps the other algorithms are incorrect because they do not account for processes failing outside of the synchronization parts (which is worse)?
Note: while still interesting historically, Peterson's algorithm also makes assumptions about the way memory works that are not valid on today's hardware.
Most locking algorithms don't account for a process dying while it is within the critical section (how can other processes distinguish between a process that died after taking a lock and a process that is merely taking a long time?).
A process dying when it is not in a critical section, however, shouldn't prevent other processes from entering or leaving. For example, a lock where two processes "take turns" entering the critical section is problematic (sketched below); if one process dies outside the critical section, the second waits forever for the first to have its turn. This is perhaps what your teacher was referring to.
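A minimal sketch of that "take turns" scheme (strict alternation), to show where it breaks:

int turn = 0;         /* shared: whose turn it is */

/* entry protocol for process i (0 or 1): */
while (turn != i)
    ;                 /* busy wait, even if the other process is dead */
/* critical section */
turn = 1 - i;         /* exit protocol: hand the turn over */

If process 1 dies, or simply never wants the critical section again, turn eventually stays stuck at 1 and process 0 waits forever even though the section is free. Peterson's algorithm avoids exactly this by also consulting the flag array.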
(As a hacky solution, you could attempt to handle processes dying within a critical section with timeouts; if a process takes too long, you assume it has died. This comes at the risk of allowing two processes into the critical section if one merely takes too long, though.)
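A rough, entirely hypothetical illustration of that hack (LOCK_TIMEOUT and the lock-stealing rule below are assumptions, not a recommended design):

#include <stdatomic.h>
#include <time.h>

#define LOCK_TIMEOUT 5   /* assumed bound on critical-section time, in seconds */

atomic_long lock_word;   /* 0 = free, otherwise the time the lock was taken */

void lock_with_timeout(void)
{
    for (;;) {
        long expected = 0;
        long now = (long)time(NULL);
        /* try to take a free lock */
        if (atomic_compare_exchange_strong(&lock_word, &expected, now))
            return;
        /* lock appears held; 'expected' now holds its acquisition time.
           If it looks stale, assume the holder died and steal it. */
        if (now - expected > LOCK_TIMEOUT &&
            atomic_compare_exchange_strong(&lock_word, &expected, now))
            return;
    }
}

void unlock(void)
{
    atomic_store(&lock_word, 0);
}

If the holder was merely slow rather than dead, the steal lets a second process in, which is exactly the risk mentioned above.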
Even some of the semaphore-based solutions fail if we assume this premature death of processes (try the producer/consumer problem). So we cannot say that the algorithm is incorrect, only that it was not designed for the failure model we are imposing on it; these are all misconceptions.
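To see the same effect with semaphores, here is a sketch of the classic producer/consumer solution (buffer management omitted): if the producer dies at the marked point, the consumer blocks forever on full, even though it never misbehaved itself.

#include <semaphore.h>

#define N 8

sem_t empty, full, mutex;

void init(void)
{
    sem_init(&empty, 0, N);   /* free slots   */
    sem_init(&full, 0, 0);    /* filled slots */
    sem_init(&mutex, 0, 1);   /* buffer lock  */
}

void producer_step(void)
{
    sem_wait(&empty);
    /* <-- if the producer dies here, 'full' is never posted */
    sem_wait(&mutex);
    /* put item into buffer */
    sem_post(&mutex);
    sem_post(&full);
}

void consumer_step(void)
{
    sem_wait(&full);   /* blocks forever if the producer died above */
    sem_wait(&mutex);
    /* take item from buffer */
    sem_post(&mutex);
    sem_post(&empty);
}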
I agree with Fabio. Instead of saying "P0 sets flag[0] = 1 and dies", I would like to consider the scenario in which P0 is preempted by the scheduler, and P1 is scheduled, immediately after P0 sets flag[0] to 1.
At first glance both processes then seem to get trapped in a busy wait:

flag[0] = 1, flag[1] = 1, and turn = 0, so the condition in P1's while loop is true and P1 busy-waits.

Now if P1 is preempted, say due to a timeout, and P0 is scheduled, P0 executes turn = 1, giving flag[0] = 1, flag[1] = 1, and turn = 1, so the condition in P0's while loop is true and P0 busy-waits.

But turn is a single shared variable; it cannot be 0 for P1 and 1 for P0 at the same time. The moment P0 writes turn = 1, P1's condition (flag[0] == 1 && turn == 0) becomes false, so the next time P1 runs it exits its loop and enters the critical section, and P0 follows once P1 clears flag[1]. What this scenario misses is that the tie-breaking turn variable guarantees at most one of the two loop conditions can hold, which is why no deadlock occurs here, and why the solution is so famous.