Volatile and Thread.MemoryBarrier in C#

To implement lock-free code for a multithreaded application I used volatile variables. Theoretically, the volatile keyword simply ensures that all threads see the most up-to-date value of a volatile variable; so if thread A updates the variable and thread B reads it just after that update happens, thread B will see the value most recently written by thread A. But as I read in the C# 4.0 in a Nutshell book, this is incorrect, because

applying volatile doesn’t prevent a write followed by a read from being swapped.

Could this problem be solved by putting Thread.MemoryBarrier() before every read of the volatile variable, like this:

private volatile bool _foo = false;

private void A()
{
    //…
    Thread.MemoryBarrier();
    if (_foo)
    {
        //do something
    }
}

private void B()
{
    //…
    _foo = true;
    //…
}

And if this solves the problem, consider a while loop that depends on that value in one of its conditions; is putting Thread.MemoryBarrier() before the while loop a correct way to fix the issue? For example:

private void A()
{
    Thread.MemoryBarrier();
    while (_someOtherConditions && _foo)
    {
        // do something.
    }
}

To be more precise, I want _foo to yield its freshest value whenever any thread reads it, at any time; so if inserting Thread.MemoryBarrier() before reading the variable fixes the issue, could I use a Foo property instead of accessing _foo directly and do a Thread.MemoryBarrier() within the getter of that property, like this:

public bool Foo
{
    get
    {
        Thread.MemoryBarrier();
        return _foo;
    }
    set
    {
        _foo = value;
    }
}


The statement in "C# 4.0 in a Nutshell" is correct, but it is moot here. Why?

  • A 'write' followed by a 'read', without 'volatile', is guaranteed to occur in program order anyway if it affects logic within a single thread.
  • Worrying about a 'write' before a 'read' in a multi-threaded program is utterly pointless in your example, as explained below.

Let's clarify. Take your original code:

private void A() 
{ 
    //… 
    if (_foo) 
    { 
        //do something 
    } 
}

What happens if your thread has already checked the _foo variable but gets suspended by the scheduler just before reaching the //do something comment? At that point the other thread could change the value of _foo, which means that all your volatiles and Thread.MemoryBarriers counted for nothing! If it is absolutely essential that the 'do something' part be avoided when the value of _foo is false, then you have no choice but to use a lock.

However, if it is OK for the 'do something' part to already be executing when _foo suddenly becomes false, then the volatile keyword alone was more than enough for your needs.
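
As a rough illustration of the lock-based approach mentioned above (this sketch is not from the original answer; the _gate field name is an assumption), making the check and the action atomic with respect to the writer looks roughly like this:

private readonly object _gate = new object();
private bool _foo;

private void A()
{
    lock (_gate)
    {
        if (_foo)
        {
            // do something - B cannot flip _foo while this thread holds the lock
        }
    }
}

private void B()
{
    lock (_gate)
    {
        _foo = true;
    }
}

With the lock in place, the volatile keyword on _foo is no longer needed, because acquiring and releasing the Monitor already provides the necessary memory barriers.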

To be clear: all the responders who are telling you to use a memory barrier are incorrect or are providing overkill.


The book is correct.
The CLR's memory model allows load and store operations to be reordered. This applies to both volatile and non-volatile variables, although volatile accesses constrain which reorderings are permitted.

Declaring a variable as volatile only means that load operations on it will have acquire semantics and store operations will have release semantics. In addition, the compiler will avoid certain optimizations that rely on the variable being accessed in a serialized, single-threaded fashion (e.g. hoisting loads and stores out of loops).
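
As a rough illustration of the hoisting point (a sketch added here, not part of the original answer), the classic case is a spin-wait on a flag: without volatile, the JIT is free to read the field once, cache it in a register, and never observe the other thread's write.

class StopFlagExample
{
    // With 'volatile', every read of _stop inside the loop is a fresh,
    // acquire-semantics load; without it, the loop below could spin forever.
    private volatile bool _stop;

    public void Worker()
    {
        while (!_stop)
        {
            // spin, waiting for another thread to call RequestStop()
        }
    }

    public void RequestStop()
    {
        _stop = true;   // volatile write with release semantics
    }
}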

Using the volatile keyword alone doesn't create critical sections, and it doesn't cause threads to magically synchronize with each other.

You should be extremely careful when you write lock-free code. There's nothing simple about it, and even experts have trouble getting it right.
Whatever the original problem you're trying to solve is, it's likely that there's a much more reasonable way to do it.


In your second example, you would also need to put a Thread.MemoryBarrier() inside the loop, to make sure you get the most recent value every time you check the loop condition.
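
A minimal sketch of that shape, assuming the _someOtherConditions and _foo fields from the question:

private void A()
{
    Thread.MemoryBarrier();        // fresh read before the first check of the condition
    while (_someOtherConditions && _foo)
    {
        // do something

        Thread.MemoryBarrier();    // fresh read before the next check of the condition
    }
}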


Pulled from here...

class Foo
{
  int _answer;
  bool _complete;

  void A()
  {
    _answer = 123;
    Thread.MemoryBarrier();    // Barrier 1
    _complete = true;
    Thread.MemoryBarrier();    // Barrier 2
  }

  void B()
  {
    Thread.MemoryBarrier();    // Barrier 3
    if (_complete)
    {
      Thread.MemoryBarrier();       // Barrier 4
      Console.WriteLine (_answer);
    }
  }
}

Barriers 1 and 4 prevent this example from writing “0”. Barriers 2 and 3 provide a freshness guarantee: they ensure that if B ran after A, reading _complete would evaluate to true.

So if we go back to your looping example... this is how it should look:

private void A()
{
    Thread.MemoryBarrier();
    while (_someOtherConditions && _foo)
    {
        //do something
        Thread.MemoryBarrier();
    }
}


Microsoft's own words on memory barriers:

MemoryBarrier is required only on multiprocessor systems with weak memory ordering (for example, a system employing multiple Intel Itanium processors).

For most purposes, the C# lock statement, the Visual Basic SyncLock statement, or the Monitor class provide easier ways to synchronize data.
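
As a brief illustration of that last point (an added sketch, not from the quoted documentation; the _sync field and method names are illustrative), the C# lock statement is shorthand for entering and exiting a Monitor inside a try/finally, so the flag update from the question could be written either way:

private readonly object _sync = new object();
private bool _foo;

private void SetFlagWithLock()
{
    lock (_sync)                   // compiles down to the Monitor pattern below
    {
        _foo = true;
    }
}

private void SetFlagWithMonitor()
{
    bool lockTaken = false;
    try
    {
        Monitor.Enter(_sync, ref lockTaken);
        _foo = true;
    }
    finally
    {
        if (lockTaken) Monitor.Exit(_sync);
    }
}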
